OpenAI explains 5 AI value models transforming enterprise strategy

AI is beginning to reshape corporate strategy as organisations shift from isolated technology experiments to broader operational transformation.

According to OpenAI, businesses that treat AI as a collection of disconnected pilots risk missing the bigger structural change that the technology enables.

A new framework describes five value models through which AI can gradually reshape companies. The first stage focuses on workforce empowerment, where tools such as ChatGPT spread AI capabilities across teams and improve everyday productivity.

Once employees develop fluency, organisations can introduce AI-native distribution models that transform how customers discover products and interact with digital services.

More advanced stages involve specialised systems. Expert capability integrates AI into research, creative production, and domain-specific analysis, allowing professionals to explore a wider range of ideas and experiments.

Meanwhile, systems and dependency management introduces AI tools capable of safely updating interconnected digital environments, including codebases, documentation, and operational processes.

The final stage involves full process re-engineering through autonomous agents. In such environments, AI systems coordinate complex workflows across departments while maintaining governance, accountability, and auditability.

Organisations that successfully progress through these stages may eventually redesign their business models rather than merely improving efficiency within existing structures.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI upgrades ChatGPT conversations with GPT-5.3 Instant

The most widely used ChatGPT model has received an update from OpenAI, introducing GPT-5.3 Instant to make everyday conversations more coherent, useful, and natural.

The upgrade focuses on improving tone, contextual understanding, and the flow of dialogue rather than only benchmark performance.

One of the main improvements concerns how the model handles refusals and safety responses. Earlier versions sometimes declined questions that could have been answered safely or delivered overly cautious explanations before responding.

GPT-5.3 Instant instead gives more direct answers while still maintaining safety constraints, reducing interruptions that previously slowed conversations.

The update also improves the way ChatGPT uses information from the web. Instead of simply summarising search results or presenting long lists of links, the model now integrates online information with its own reasoning.

Such an approach aims to produce more relevant answers that highlight key insights at the beginning of responses.

Reliability has also improved. Internal evaluations conducted by OpenAI show reductions in hallucination rates across multiple domains.

When using web sources, hallucinations dropped by roughly 26.8 percent in higher-risk fields such as medicine, law, and finance. Improvements were also recorded when the model relied only on its internal knowledge.
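As a quick illustration of what a relative reduction of that size means in practice, the sketch below applies the reported ~26.8 percent drop to a baseline rate. The baseline figure is purely hypothetical for illustration; OpenAI reported only the relative change, not absolute rates.

```python
# Illustrative arithmetic only: the baseline hallucination rate below is
# hypothetical. OpenAI reported only a relative drop of roughly 26.8%.
baseline_rate = 0.10          # assumed hallucination rate before the update
relative_reduction = 0.268    # reported relative drop when using web sources

new_rate = baseline_rate * (1 - relative_reduction)
print(f"{baseline_rate:.1%} -> {new_rate:.2%}")  # prints 10.0% -> 7.32%
```

The point is simply that a relative reduction scales with the starting rate: a domain with more frequent errors sees a larger absolute improvement than one that was already accurate.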

Beyond factual accuracy, the model is designed to feel more natural in conversation. OpenAI says the system now avoids overly preachy language, unnecessary disclaimers, and intrusive remarks that previously disrupted dialogue.

The goal is a more consistent conversational personality across updates, while maintaining the familiar user experience of ChatGPT.

Medical chatbots spark powerful debate over serious health risks and benefits

Medical chatbots are rapidly becoming part of digital healthcare as technology companies expand AI tools into health services. Companies such as OpenAI and Anthropic are introducing chatbot features designed to answer medical questions using personal data.

Medical chatbots can analyse information from medical records, wearable devices and wellness applications. By incorporating details such as prescriptions, age and prior diagnoses, they aim to provide more personalised responses than a standard internet search.

However, companies stress that these tools are not substitutes for professional medical care. They are not intended to diagnose conditions but rather to summarise results, explain terminology and help users prepare for appointments.

Supporters argue that medical chatbots can improve patient understanding. Experts from the University of California, San Francisco, note that the tools may clarify complex reports and highlight essential health trends when used responsibly.

Despite these benefits, significant limitations remain. AI systems can hallucinate or generate inaccurate advice, and users may struggle to distinguish reliable guidance from subtle errors.

Independent research reinforces these concerns. A 2024 study by the University of Oxford found that participants who used chatbots for hypothetical health scenarios did not make better decisions than those who relied on online searches or personal judgement.

Performance was strong when analysing structured written cases. Yet effectiveness declined during real-world interactions, where communication gaps affected outcomes.

Privacy presents another major issue. Medical chatbots often require users to upload sensitive health information to deliver personalised responses.

Unlike doctors and hospitals, AI companies are not bound by HIPAA, the US federal health privacy law. Although platforms state that data is stored separately and not used to train models, privacy standards differ from those in traditional healthcare.

Experts from Stanford University advise users to understand these differences before sharing medical records. Transparency and informed consent are critical considerations.

Medical chatbots are also inappropriate in emergencies. Individuals experiencing symptoms such as chest pain, shortness of breath or severe headaches should seek immediate medical attention instead of consulting AI tools.

Even in non-urgent cases, specialists recommend maintaining healthy scepticism. Consulting multiple AI systems may provide a form of second opinion, but it does not replace professional medical advice.

Medical chatbots, therefore, represent both opportunity and risk. As their capabilities expand, users must carefully weigh convenience and personalisation against accuracy, oversight and data protection concerns.

OpenAI and Microsoft strengthen their long-term AI collaboration

Microsoft and OpenAI have reaffirmed their long-standing collaboration after new funding and partnerships raised speculation about their relationship.

Both firms stressed that recent announcements leave their original agreements intact, preserving a framework built on technical integration, trust and shared ambitions for AI development.

Microsoft’s exclusive licence to OpenAI’s intellectual property remains untouched, as does its position as the sole cloud provider for stateless APIs powering OpenAI models.

These APIs can be accessed through either company. Yet all such calls, including those arising from third-party partnerships such as OpenAI’s work with Amazon, continue to run on Azure rather than on alternative clouds. OpenAI’s own products, including Frontier, also stay hosted on Azure.

Revenue-sharing arrangements are unchanged, alongside the contractual definition and evaluation process for artificial general intelligence.

Both companies emphasised that the partnership was designed to allow independent initiatives while preserving deep cooperation across research, engineering and product innovation.

OpenAI retains the freedom to secure additional compute capacity elsewhere, supported by large-scale initiatives such as the Stargate project.

Even with broader collaborations emerging across the industry, both firms present their alliance as central to advancing responsible AI and expanding access to powerful tools worldwide.

AI misuse in online scams involving OpenAI models

OpenAI has reported new instances of its models being exploited in online scams and coordinated information campaigns. The company detailed actions to remove offending accounts and strengthen safeguards, highlighting misuse in fraud and deceptive content creation.

Several cases involved romance and ‘task’ scams, in which AI-generated messages built emotional engagement before requesting payment. One network, dubbed ‘Operation Date Bait,’ used chatbots to promote a fictitious dating service targeting young men in Indonesia.

Another, ‘Operation False Witness,’ saw actors posing as legal professionals to solicit advance fees for non-existent recovery services.

The report also outlined coordinated campaigns leveraging AI to produce articles, social media posts, and comments on geopolitical topics. In ‘Operation Trolling Stone,’ AI-generated content on a Russian arrest in Argentina was shared widely in multiple languages to mimic grassroots engagement.

OpenAI stressed that while AI was sometimes used in these campaigns, engagement was driven largely by account reach and size rather than by the AI-generated content itself.

The company continues monitoring misuse and collaborates with partners and authorities to curb fraudulent or deceptive activity. Systems have been updated to decline policy-violating requests, and OpenAI noted that not all suspicious content online was generated using its tools.

OpenClaw creator Peter Steinberger urges playful approach to AI coding

Peter Steinberger, creator of the viral AI agent OpenClaw and now at OpenAI, urged developers to approach AI experimentation with curiosity rather than rigid plans. On the Builders Unscripted podcast, he said progress often comes from exploration rather than expertise.

He said OpenClaw began without a roadmap. Early tests included a WhatsApp integration he paused, expecting major labs to build similar tools. When that did not happen, he developed his own prototype and refined it through real-world use.

Using the tool in low-connectivity environments helped clarify its value. Through trial and iteration, he observed how modern AI models can generate workable solutions without explicit programming, reshaping how developers think about problem-solving and workflows.

He cautioned that coding with AI is a skill that requires practice. Comparing it to learning guitar, Steinberger said early frustration is common, but persistence leads to improved intuition and efficiency over time.

Steinberger argued that developers who focus on solving problems and creating useful tools will remain in demand. Treating AI as a collaborative instrument rather than a shortcut, he said, is essential in a rapidly shifting technology landscape.

AI misuse exposed as OpenAI details global disinformation and scam networks

OpenAI said criminal and state-linked groups misused ChatGPT for disinformation, scams and covert influence. Its latest threat report details coordinated account bans and highlights how AI tools are embedded within broader operational workflows rather than used in isolation.

One investigation linked accounts to Chinese law enforcement engaged in what were described as ‘cyber special operations’. Activities included planning influence campaigns, mass-reporting dissidents and drafting forged materials, with related efforts continuing through other tools despite model refusals.

The report also outlined a Cambodia-based romance scam targeting young men in Indonesia through a fake dating agency. Operators combined manual prompting with automated chatbots to sustain conversations and facilitate financial fraud, leading to account removals.

Separately, accounts tied to Russia’s ‘Rybar’ network used ChatGPT to draft and translate posts distributed across multiple platforms. OpenAI noted that campaign impact depended more on account reach and coordination than on AI-generated content alone.

Across China, Russia and parts of Southeast Asia, actors treated AI as one tool among many, alongside fake profiles, paid advertising and forged documents. OpenAI called for cross-industry vigilance, stressing the need to analyse behavioural patterns across platforms.

OpenAI faces legal action in South Korea from top networks

South Korea’s leading terrestrial broadcasters have filed a lawsuit against OpenAI, claiming that the company trained its ChatGPT model using their news content without permission. KBS, MBC, and SBS are seeking an injunction to halt the alleged infringement and to recover damages.

The Korea Broadcasters Association said OpenAI generates significant revenue from its GPT services and has licensing agreements with media organisations worldwide.

Despite this, the company has refused to negotiate with the South Korean networks, leaving them without recourse to ensure proper use of their content.

The lawsuit emphasises the protection of intellectual property and creators’ rights, arguing that domestic copyright holders face high legal costs and barriers when confronting global technology companies. It also raises broader questions about South Korea’s data sovereignty in the age of AI.

Earlier action against Naver set a precedent for copyright enforcement in AI applications.

Although KBS subsequently partnered with Naver for AI-driven media solutions, the current case underscores continuing disputes over lawful access to broadcast content for generative AI training.

OpenAI model revises proof claim

OpenAI has published its attempts to solve all 10 problems in the First Proof challenge, a research-level maths test designed to assess whether AI can produce checkable, domain-specific proofs. The problems, created by leading experts, require extended reasoning rather than short answers.

The company said at least five of its proof attempts are likely correct following expert feedback, although one previously confident submission has now been judged incorrect. Several other attempts remain under review as specialists continue to assess the arguments.

According to OpenAI, the evaluation involved limited human supervision, with researchers sometimes prompting the model to refine or clarify reasoning. The process included exchanges between an internal model and ChatGPT for verification, formatting and style adjustments.

OpenAI described frontier research challenges, such as First Proof, as crucial for testing next-generation AI systems. The company said it plans to deepen its engagement with academics to develop more rigorous evaluation frameworks for research-grade reasoning.

EVMbench from OpenAI, Paradigm and OtterSec measures AI smart contract risks

OpenAI, with Paradigm and OtterSec, introduced EVMbench to test how AI agents detect, patch, and exploit smart contract flaws. The benchmark draws on 120 real vulnerabilities from 40 blockchain projects to better reflect live conditions.

Researchers report that leading agents can now discover and exploit vulnerabilities end-to-end in live blockchain instances. Over six months, exploit success rates rose sharply, prompting both praise for improved auditing capabilities and concern over the rapid scaling of offensive skills.

EVMbench evaluates agents across three modes: detect, patch, and exploit. Each stage reflects increasing technical complexity and mirrors the responsibilities faced in production blockchain environments, where contracts are often immutable, and errors can lead to irreversible losses.
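The three modes lend themselves to per-mode scoring. Below is a minimal sketch of how a harness might tally agent success rates across them; the mode names come from the article, but the data, function names, and scoring structure are assumptions for illustration, not EVMbench's actual API.

```python
from collections import defaultdict

# Hypothetical result records: (agent, mode, success). The modes mirror the
# article's detect / patch / exploit stages; the records themselves are invented.
results = [
    ("agent-a", "detect", True),
    ("agent-a", "patch", False),
    ("agent-a", "exploit", True),
    ("agent-b", "detect", True),
    ("agent-b", "patch", True),
    ("agent-b", "exploit", False),
]

def success_rates(records):
    """Aggregate per-(agent, mode) success rates from raw result records."""
    totals = defaultdict(lambda: [0, 0])  # (agent, mode) -> [successes, attempts]
    for agent, mode, ok in records:
        totals[(agent, mode)][0] += int(ok)
        totals[(agent, mode)][1] += 1
    return {key: successes / attempts for key, (successes, attempts) in totals.items()}

rates = success_rates(results)
print(rates[("agent-a", "detect")])  # prints 1.0
```

Reporting rates per mode rather than one aggregate score matters here because, as the article notes, each stage reflects different technical complexity: an agent strong at detection may still fail to patch safely.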

Recent incidents underline the stakes. A vulnerability in AI-generated Solidity code reportedly mispriced an asset, triggering liquidations and losses. Such cases highlight the risks of deploying AI-written financial logic without rigorous human review and governance safeguards.

While EVMbench advances measurement of AI capabilities, it remains limited to curated vulnerabilities and sandboxed conditions. As blockchain adoption expands and criminal misuse evolves, researchers stress the need for responsible AI development alongside stronger smart contract security practices.
