Codex Security expands OpenAI’s push into cybersecurity tools

OpenAI has launched Codex Security, an AI-powered application security agent that detects hard-to-find software vulnerabilities and proposes fixes through advanced reasoning. Drawing on detailed context about a system’s architecture, the tool identifies security risks that conventional automation often misses.

The system uses advanced models to analyse repositories, construct project-specific threat models, and prioritise vulnerabilities based on their potential real-world impact. By combining automated validation with system-level context, Codex Security aims to reduce the number of false positives that security teams must review while highlighting high-confidence findings.

Initially developed under the name Aardvark, the tool has been tested in private deployments over the past year. During early use, OpenAI said it uncovered several critical vulnerabilities, including a cross-tenant authentication flaw and a server-side request forgery issue, allowing internal teams to quickly patch affected systems.

The company says improvements during the beta phase significantly reduced noise in vulnerability reports. In some repositories, unnecessary alerts fell by 84 percent, while over-reported severity dropped by more than 90 percent, and false positives declined by more than half.

Codex Security is now rolling out in research preview for ChatGPT Pro, Enterprise, Business, and Edu customers. OpenAI also plans to expand access to open-source maintainers through a dedicated programme that offers security scanning and support to help identify and remediate vulnerabilities across widely used projects.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI legal advice case asks whether ChatGPT crosses legal boundaries

A newly filed lawsuit against OpenAI raises a key issue: Does allowing generative AI systems like ChatGPT to provide legal advice violate laws that bar the unauthorised practice of law (UPL)? UPL means providing legal services, such as drafting filings or giving advice, without the required legal qualifications or a state licence.

The case claims an individual used ChatGPT to prepare legal filings in a dispute with Nippon Life Insurance, prompting the insurer to argue that OpenAI should be held responsible for the outcome.

The lawsuit claims ChatGPT helped the user challenge a settled legal dispute, forcing the insurer to spend additional time and resources responding to filings produced with the chatbot. The claim alleges tortious interference with a contract, meaning the unlawful disruption of an existing agreement by causing one of the parties to breach or alter it.

The suit also claims unauthorised practice of law and abuse of the judicial process, meaning improper use of the legal system to gain an advantage, and argues OpenAI should be liable because ChatGPT operates under its control. At its core, the dispute centres on whether AI systems should analyse disputes and offer legal advice as a lawyer would.

Advocates argue the tools could widen access to legal advice. They could make legal support more accessible and affordable for those who cannot easily hire a lawyer. However, US legal frameworks restrict the provision of legal advice to licensed lawyers. The rules are designed to protect consumers and ensure professional accountability.

Critics argue that limiting legal advice to licensed lawyers preserves an expensive monopoly and hinders access to justice. AI-driven legal tools highlight this tension over the future of legal services.

The outcome of this lawsuit will likely hinge on whether AI-generated responses constitute intentional legal advice and whether OpenAI can be held liable for such outputs. Even if it fails, the case foregrounds the broader debate over granting generative AI a legitimate role in legal guidance.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

ChatGPT ‘adult mode’ launch delayed as OpenAI focuses on core improvements

OpenAI has postponed the launch of ChatGPT’s ‘adult mode’, a feature designed to let verified adult users access erotica and other mature content.

Teams are focusing on improving intelligence, personality and proactive behaviour instead of releasing the feature immediately.

The feature was first announced by Sam Altman in October, with an initial December rollout planned, aiming to allow adults more freedom while maintaining safety for younger users.

The project faced an earlier delay as internal teams prioritised the core ChatGPT experience.

OpenAI stated it still supports the principle of treating adults like adults but warned that achieving the right experience will require more time. No new release date has been provided.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI explains 5 AI value models transforming enterprise strategy

AI is beginning to reshape corporate strategy as organisations shift from isolated technology experiments to broader operational transformation.

According to OpenAI, businesses that treat AI as a collection of disconnected pilots risk missing the bigger structural change that the technology enables.

A new framework describes five value models through which AI can gradually reshape companies. The first stage focuses on workforce empowerment, where tools such as ChatGPT spread AI capabilities across teams and improve everyday productivity.

Once employees develop fluency, organisations can introduce AI-native distribution models that transform how customers discover products and interact with digital services.

More advanced stages involve specialised systems. Expert capability integrates AI into research, creative production, and domain-specific analysis, allowing professionals to explore a wider range of ideas and experiments.

Meanwhile, systems and dependency management introduce AI tools capable of safely updating interconnected digital environments, including codebases, documentation, and operational processes.

The final stage involves full process re-engineering through autonomous agents. In such environments, AI systems coordinate complex workflows across departments while maintaining governance, accountability, and auditability.

Organisations that successfully progress through these stages may eventually redesign their business models rather than merely improving efficiency within existing structures.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI upgrades ChatGPT conversations with GPT-5.3 Instant

OpenAI has updated the most widely used ChatGPT model, introducing GPT-5.3 Instant to make everyday conversations more coherent, useful and natural.

The upgrade focuses on improving tone, contextual understanding and the flow of dialogue rather than benchmark performance alone.

One of the main improvements concerns how the model handles refusals and safety responses. Earlier versions sometimes declined questions that could have been answered safely or delivered overly cautious explanations before responding.

GPT-5.3 Instant instead gives more direct answers while still maintaining safety constraints, reducing interruptions that previously slowed conversations.

The update also improves the way ChatGPT uses information from the web. Instead of simply summarising search results or presenting long lists of links, the model now integrates online information with its own reasoning.

Such an approach aims to produce more relevant answers that highlight key insights at the beginning of responses.

Reliability has also improved. Internal evaluations conducted by OpenAI show reductions in hallucination rates across multiple domains.

When using web sources, hallucinations dropped by roughly 26.8 percent in higher-risk fields such as medicine, law, and finance. Improvements were also recorded when the model relied only on its internal knowledge.

Beyond factual accuracy, the model is designed to feel more natural in conversation. OpenAI says the system now avoids overly preachy language, unnecessary disclaimers, and intrusive remarks that previously disrupted dialogue.

The goal is a more consistent conversational personality across updates, while maintaining the familiar user experience of ChatGPT.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Medical chatbots spark powerful debate over serious health risks and benefits

Medical chatbots are rapidly becoming part of digital healthcare as technology companies expand AI tools into health services. Companies such as OpenAI and Anthropic are introducing chatbot features designed to answer medical questions using personal data.

Medical chatbots can analyse information from medical records, wearable devices and wellness applications. By incorporating details such as prescriptions, age and prior diagnoses, they aim to provide more personalised responses than a standard internet search.

However, companies stress that these tools are not substitutes for professional medical care. They are not intended to diagnose conditions but rather to summarise results, explain terminology and help users prepare for appointments.

Supporters argue that medical chatbots can improve patient understanding. Experts from the University of California, San Francisco, note that the tools may clarify complex reports and highlight essential health trends when used responsibly.

Despite these benefits, significant limitations remain. AI systems can hallucinate or generate inaccurate advice, and users may struggle to distinguish reliable guidance from subtle errors.

Independent research reinforces these concerns. A 2024 study by the University of Oxford found that participants who used chatbots for hypothetical health scenarios did not make better decisions than those who relied on online searches or personal judgement.

The chatbots performed well when analysing structured written cases, yet their effectiveness declined in real-world interactions, where communication gaps affected outcomes.

Privacy presents another major issue. Medical chatbots often require users to upload sensitive health information to deliver personalised responses.

Unlike doctors and hospitals, AI companies are not bound by HIPAA, the US federal health privacy law. Although platforms state that data is stored separately and not used to train models, privacy standards differ from those in traditional healthcare.

Experts from Stanford University advise users to understand these differences before sharing medical records. Transparency and informed consent are critical considerations.

Medical chatbots are also inappropriate in emergencies. Individuals experiencing symptoms such as chest pain, shortness of breath or severe headaches should seek immediate medical attention instead of consulting AI tools.

Even in non-urgent cases, specialists recommend maintaining healthy scepticism. Consulting multiple AI systems may provide a form of second opinion, but it does not replace professional medical advice.

Medical chatbots, therefore, represent both opportunity and risk. As their capabilities expand, users must carefully weigh convenience and personalisation against accuracy, oversight and data protection concerns.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI and Microsoft strengthen their long-term AI collaboration

Microsoft and OpenAI have reaffirmed their long-standing collaboration after new funding and partnerships raised speculation about their relationship.

Both firms stressed that recent announcements leave their original agreements intact, preserving a framework built on technical integration, trust and shared ambitions for AI development.

Microsoft’s exclusive licence to OpenAI’s intellectual property remains untouched, as does its position as the sole cloud provider for stateless APIs powering OpenAI models.

These APIs can be accessed through either company. Yet all such calls, including those arising from third-party partnerships such as OpenAI’s work with Amazon, continue to run on Azure rather than on alternative clouds. OpenAI’s own products, including Frontier, also stay hosted on Azure.

Revenue-sharing arrangements are unchanged, alongside the contractual definition and evaluation process for artificial general intelligence.

Both companies emphasised that the partnership was designed to allow independent initiatives while preserving deep cooperation across research, engineering and product innovation.

OpenAI retains the freedom to secure additional compute capacity elsewhere, supported by large-scale initiatives such as the Stargate project.

Even with broader collaborations emerging across the industry, both firms present their alliance as central to advancing responsible AI and expanding access to powerful tools worldwide.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI misuse in online scams involving OpenAI models

OpenAI has reported new instances of its models being exploited in online scams and coordinated information campaigns. The company detailed actions to remove offending accounts and strengthen safeguards, highlighting misuse in fraud and deceptive content creation.

Several cases involved romance and ‘task’ scams, in which AI-generated messages built emotional engagement before requesting payment. One network, dubbed ‘Operation Date Bait,’ used chatbots to promote a fictitious dating service targeting young men in Indonesia.

Another, ‘Operation False Witness,’ saw actors posing as legal professionals to solicit advance fees for non-existent recovery services.

The report also outlined coordinated campaigns leveraging AI to produce articles, social media posts, and comments on geopolitical topics. In ‘Operation Trolling Stone,’ AI-generated content on a Russian arrest in Argentina was shared widely in multiple languages to mimic grassroots engagement.

OpenAI stressed that while AI was sometimes used to produce content, engagement was largely driven by account reach and size.

The company continues to monitor misuse and collaborates with partners and authorities to curb fraudulent or deceptive activity. Its systems have been updated to decline policy-violating requests, and OpenAI noted that not all suspicious content online was generated using its tools.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenClaw creator Peter Steinberger urges playful approach to AI coding

Peter Steinberger, creator of the viral AI agent OpenClaw and now at OpenAI, urged developers to approach AI experimentation with curiosity rather than rigid plans. On the Builders Unscripted podcast, he said progress often comes from exploration rather than expertise.

He said OpenClaw began without a roadmap. Early tests included a WhatsApp integration he paused, expecting major labs to build similar tools. When that did not happen, he developed his own prototype and refined it through real-world use.

Using the tool in low-connectivity environments helped clarify its value. Through trial and iteration, he observed how modern AI models can generate workable solutions without explicit programming, reshaping how developers think about problem-solving and workflows.

He cautioned that coding with AI is a skill that requires practice. Comparing it to learning guitar, Steinberger said early frustration is common, but persistence leads to improved intuition and efficiency over time.

Steinberger argued that developers who focus on solving problems and creating useful tools will remain in demand. Treating AI as a collaborative instrument rather than a shortcut, he said, is essential in a rapidly shifting technology landscape.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI misuse exposed as OpenAI details global disinformation and scam networks

OpenAI said criminal and state-linked groups misused ChatGPT for disinformation, scams and covert influence. Its latest threat report details coordinated account bans and highlights how AI tools are embedded within broader operational workflows rather than used in isolation.

One investigation linked accounts to Chinese law enforcement engaged in what were described as ‘cyber special operations’. Activities included planning influence campaigns, mass-reporting dissidents and drafting forged materials, with related efforts continuing through other tools despite model refusals.

The report also outlined a Cambodia-based romance scam targeting young men in Indonesia through a fake dating agency. Operators combined manual prompting with automated chatbots to sustain conversations and facilitate financial fraud, leading to account removals.

Separately, accounts tied to Russia’s ‘Rybar’ network used ChatGPT to draft and translate posts distributed across multiple platforms. OpenAI noted that campaign impact depended more on account reach and coordination than on AI-generated content alone.

Across China, Russia and parts of Southeast Asia, actors treated AI as one tool among many, alongside fake profiles, paid advertising and forged documents. OpenAI called for cross-industry vigilance, stressing the need to analyse behavioural patterns across platforms.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!