OpenAI launches GPT‑5.2 for professional knowledge work

OpenAI has introduced GPT‑5.2, its most advanced model series to date, designed to enhance professional knowledge work. Users report significant time savings, with daily reductions of 40-60 minutes and more than 10 hours per week for heavy users.

The new model excels at generating spreadsheets, presentations, and code, while also handling complex, multi-step projects with improved speed and accuracy.

Performance benchmarks show GPT‑5.2 surpasses industry professionals on GDPval tasks across 44 occupations, producing outputs over eleven times faster and at a fraction of the cost.

Coding abilities have also reached a new standard, encompassing debugging, refactoring, front-end UI work, and multi-language software engineering tasks, providing engineers with a more reliable daily assistant.

GPT‑5.2 Thinking improves long-context reasoning, vision, and tool-calling capabilities. It accurately interprets long documents, charts, and graphical interfaces while coordinating multi-agent workflows.

The model also demonstrates enhanced factual accuracy and fewer hallucinations, making it more dependable for research, analysis, and decision-making.

The rollout includes ChatGPT Instant, Thinking, and Pro plans, as well as API access for developers. Early tests show GPT‑5.2 accelerates research, solves complex problems, and improves professional workflows, setting a new benchmark for real-world AI tasks.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI launches training courses for workers and teachers

OpenAI has unveiled two training courses designed to prepare workers and educators for careers shaped by AI. The new AI Foundations course is delivered directly inside ChatGPT, enabling learners to practise tasks, receive guidance, and earn a credential that signals job-ready skills.

Employers, including Walmart, John Deere, Lowe’s, BCG and Accenture, are among the early adopters. Public-sector partners in the US are also joining pilots, while universities such as Arizona State and the California State system are testing certification pathways for students.

A second course, ChatGPT Foundations for Teachers, is available on Coursera and is designed for K-12 educators. It introduces core concepts, classroom applications and administrative uses, reflecting growing teacher reliance on AI tools.

OpenAI states that demand for AI skills is increasing rapidly, with workers trained in the field earning significantly higher salaries. The company frames the initiative as a key step toward its upcoming jobs platform.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

OpenAI launches Agentic AI Foundation with industry partners

The US AI company, OpenAI, has co-founded the Agentic AI Foundation (AAIF) under the Linux Foundation alongside Anthropic, Block, Google, Microsoft, AWS, Bloomberg, and Cloudflare.

A foundation that aims to provide neutral stewardship for open, interoperable agentic AI infrastructure as systems move from experimental prototypes into real-world applications.

The initiative includes the donation of OpenAI’s AGENTS.md, a lightweight Markdown file designed to provide agents with project-specific instructions and context.

Since its release in August 2025, AGENTS.md has been adopted by more than 60,000 open-source projects, ensuring consistent behaviour across diverse repositories and frameworks. Contributions from Anthropic and Block will include the Model Context Protocol and the goose project, respectively.

By establishing AAIF, the co-founders intend to prevent ecosystem fragmentation and foster safe, portable, and interoperable agentic AI systems.

The foundation provides a shared platform for development, governance, and extension of open standards, with oversight by the Linux Foundation to guarantee neutral, long-term stewardship.

OpenAI emphasises that the foundation will support developers, enterprises, and the wider open-source community, inviting contributors to help shape agentic AI standards.

The AAIF reflects a collaborative effort to advance agentic AI transparently and in the public interest while promoting innovation across tools and platforms.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Instacart deepens partnership with OpenAI for real-time AI shopping

OpenAI and Instacart are expanding their longstanding collaboration by introducing a fully integrated grocery shopping experience inside ChatGPT.

Users can receive meal inspiration, browse products and place orders in one continuous conversation instead of switching across separate platforms.

A service that brings together Instacart’s real-time retail network with OpenAI’s most advanced models to produce an experience that feels like a direct link between a simple request and completed delivery.

The Instacart app becomes the first service to offer a full checkout flow inside ChatGPT by using the Agentic Commerce Protocol. When users mention food, ingredients or recipe ideas, ChatGPT can surface the app immediately.

Once the user connects an Instacart account, the system selects suitable items from nearby retailers and builds a complete cart that can be reviewed before payment. Users then pay securely inside the chat while Instacart manages collection and delivery through its established network.

The update also reflects broader cooperation between the two companies. Instacart continues to rely on OpenAI APIs to support personalised suggestions and real time guidance across its customer experience.

ChatGPT Enterprise assists internal teams, while Codex powers an internal coding agent that shortens development cycles instead of slowing them down with manual tasks. The partnership builds on Instacart’s early involvement in the Operator research preview, where it helped refine emerging agentic technologies.

A renewed partnership that strengthens OpenAI’s growing enterprise ecosystem. The company already works with major global brands across sectors such as retail, financial services and telecommunications.

The Instacart integration offers a view of how conversational agents may act as a bridge between everyday intent and immediate real-world action.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Australia seals $4.6 billion deal for new AI hub

OpenAI has partnered with Australian data centre operator NextDC to build a major AI campus in western Sydney. The companies signed an agreement covering development, planning and long-term operation of the vast site.

NextDC said the project will include a supercluster of graphics processors to support advanced AI workloads. Both firms intend to create infrastructure capable of meeting rapid global demand for high-performance computing.

Australia estimates the development at A$7 billion and forecasts thousands of jobs during construction and ongoing roles across engineering and operations. Officials say the initiative aligns with national efforts to strengthen technological capability.

Plans feature renewable energy procurement and cooling systems that avoid drinking water use, addressing sustainability concerns. Treasurer Jim Chalmers said the project reflects growing confidence in Australia’s talent, clean energy capacity and emerging AI economy.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

NITDA warns of prompt injection risks in ChatGPT models

Nigeria’s National Information Technology Development Agency (NITDA) has issued an urgent advisory on security weaknesses in OpenAI’s ChatGPT models. The agency warned that flaws affecting GPT-4o and GPT-5 could expose users to data leakage through indirect prompt injection.

According to NITDA’s Computer Emergency Readiness and Response Team, seven critical flaws were identified that allow hidden instructions to be embedded in web content. Malicious prompts can be triggered during routine browsing, search or summarisation without user interaction.

The advisory warned that attackers can bypass safety filters, exploit rendering bugs and manipulate conversation context. Some techniques allow injected instructions to persist across future interactions by interfering with the models’ memory functions.

While OpenAI has addressed parts of the issue, NITDA said large language models still struggle to reliably distinguish malicious data from legitimate input. Risks include unintended actions, information leakage and long-term behavioural influence.

NITDA urged users and organisations in Nigeria to apply updates promptly and limit browsing or memory features when not required. The agency said that exposing AI systems to external tools increases their attack surface and demands stronger safeguards.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI launches nationwide AI initiative in Australia

OpenAI has launched OpenAI for Australia, a nationwide initiative to unlock the economic and societal benefits of AI. The program aims to support sovereign AI infrastructure, upskill Australians, and accelerate the country’s local AI ecosystem.

CEO Sam Altman highlighted Australia’s deep technical talent and strong institutions as key factors in becoming a global leader in AI.

A significant partnership with NEXTDC will see the development of a next-generation hyperscale AI campus and large GPU supercluster at Sydney’s Eastern Creek S7 site.

The project is expected to create thousands of jobs, boost local supplier opportunities, strengthen STEM and AI skills, and provide sovereign compute capacity for critical workloads.

OpenAI will also upskill more than 1.2 million Australians in collaboration with CommBank, Coles and Wesfarmers. OpenAI Academy will provide tailored modules to give workers and small business owners practical AI skills for confident daily use.

The nationwide rollout of courses is scheduled to begin in 2026.

OpenAI is launching its first Australian start-up program with local venture capital firms Blackbird, Square Peg, and AirTree to support home-grown innovation. Start-ups will receive API credits, mentorship, workshops, and access to Founder Day to accelerate product development and scale AI solutions locally.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

OpenAI faced questions after ChatGPT surfaced app prompts for paid users

ChatGPT users complained after the system surfaced an unexpected Peloton suggestion during an unrelated conversation. The prompt appeared for a Pro Plan subscriber and triggered questions about ad-like behaviour. Many asked why paid chats were showing promotional-style links.

OpenAI said the prompt was part of early app-discovery tests, not advertising. Staff acknowledged that the suggestion was irrelevant to the query. They said the system is still being adjusted to avoid confusing or misplaced prompts.

Users reported other recommendations, including music apps that contradicted their stated preferences. The lack of an option to turn off these suggestions fuelled irritation. Paid subscribers warned that such prompts undermine the service’s reliability.

OpenAI described the feature as a step toward integrating apps directly into conversations. The aim is to surface tools when genuinely helpful. Early trials, however, have demonstrated gaps between intended relevance and actual outcomes.

The tests remain limited to selected regions and are not active in parts of Europe. Critics argue intrusive prompts risk pushing users to competitors. OpenAI said refinements will continue to ensure suggestions feel helpful, not promotional.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Regulators question transparency after Mixpanel data leak

Mixpanel is facing criticism after disclosing a security incident with minimal detail, providing only a brief note before the US Thanksgiving weekend. Analysts say the timing and lack of clarity set a poor example for transparency in breach reporting.

OpenAI later confirmed its own exposure, stating that analytics data linked to developer activity had been obtained from Mixpanel’s systems. It stressed that ChatGPT users were not affected and that it had halted its use of the service following the incident.

OpenAI said the stolen information included names, email addresses, coarse location data and browser details, raising concerns about phishing risks. It noted that no advertising identifiers were involved, limiting broader cross-platform tracking.

Security experts say the breach highlights long-standing concerns about analytics companies that collect detailed behavioural and device data across thousands of apps. Mixpanel’s session-replay tools can be sensitive, as they can inadvertently capture private information.

Regulators argue the case shows why analytics providers have become prime targets for attackers. They say that more transparent disclosure from Mixpanel is needed to assess the scale of exposure and the potential impact on companies and end-users.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI expands investment in mental health safety research

Yesterday, OpenAI launched a new grant programme to support external research on the connection between AI and mental health.

An initiative that aims to expand independent inquiry into how people express distress, how AI interprets complex emotional signals and how different cultures shape the language used to discuss sensitive experiences.

OpenAI also hopes that broader participation will strengthen collective understanding, rather than keeping progress confined to internal studies.

The programme encourages interdisciplinary work that brings together technical specialists, mental health professionals and people with lived experience. OpenAI is seeking proposals that can offer clear outputs, such as datasets, evaluation methods, or practical insights, that improve safety and guidance.

Researchers may focus on patterns of distress in specific communities, the influence of slang and vernacular, or the challenges that appear when mental health symptoms manifest in ways that current systems fail to recognise.

The grants also aim to expand knowledge of how providers use AI within care settings, including where tools are practical, where limitations appear and where risks emerge for users.

Additional areas of interest include how young people respond to different tones or styles, how grief is expressed in language and how visual cues linked to body image concerns can be interpreted responsibly.

OpenAI emphasises that better evaluation frameworks, ethical datasets and annotated examples can support safer development across the field.

Applications are open until 19 December, with decisions expected by mid-January. The programme forms part of OpenAI’s broader effort to invest in well-being and safety research, offering financial support to independent teams working across diverse cultural and linguistic contexts.

The company argues that expanding evidence and perspectives will contribute to a more secure and supportive environment for future AI systems.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!