OpenAI buys Jony Ive’s AI hardware firm

OpenAI has acquired hardware startup io Products, founded by former Apple designer Jony Ive, in a $6.5 billion equity deal. Ive will now take on a leading creative role at the company, aiming to craft cutting-edge hardware for the era of generative AI.

The move signals OpenAI’s intention to build its own hardware platform instead of relying on existing ecosystems like Apple’s iOS or Google’s Android. By doing so, the firm plans to fuse its AI technology, including ChatGPT, with original physical products designed entirely in-house.

Jony Ive, the designer behind iconic Apple devices such as the iPhone and iMac, had already been collaborating with OpenAI through his firm LoveFrom for the past two years. Their shared ambition is to create hardware that redefines how people interact with AI.

While exact details remain under wraps, OpenAI CEO Sam Altman and Ive have teased that a prototype is in development, described as potentially ‘the coolest piece of technology the world has ever seen’.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Experts urge stronger safeguards as jailbroken chatbots leak illegal data

Hacked AI-powered chatbots pose serious security risks by revealing illicit knowledge the models absorbed during training, according to researchers at Ben-Gurion University.

Their study highlights how ‘jailbroken’ large language models (LLMs) can be manipulated to produce dangerous instructions, such as how to hack networks, manufacture drugs, or carry out other illegal activities.

The chatbots, including those powered by models from companies like OpenAI, Google, and Anthropic, are trained on vast internet datasets. While attempts are made to exclude harmful material, AI systems may still internalise sensitive information.

Safety controls are meant to block the release of this knowledge, but researchers demonstrated how it could be bypassed using specially crafted prompts.

The researchers developed a ‘universal jailbreak’ capable of compromising multiple leading LLMs. Once bypassed, the chatbots consistently responded to queries that should have triggered safeguards.

They found some AI models openly advertised online as ‘dark LLMs,’ designed without ethical constraints and willing to generate responses that support fraud or cybercrime.

Professor Lior Rokach and Dr Michael Fire, who led the research, said the growing accessibility of this technology lowers the barrier for malicious use. They warned that dangerous knowledge could soon be accessed by anyone with a laptop or phone.

Despite notifying AI providers about the jailbreak method, the researchers say the response was underwhelming. Some companies dismissed the concerns as outside the scope of bug bounty programs, while others did not respond.

The report calls on tech companies to improve their models’ security by screening training data, using advanced firewalls, and developing methods for machine ‘unlearning’ to help remove illicit content. Experts also called for clearer safety standards and independent oversight.

OpenAI said its latest models have improved resilience to jailbreaks, and Microsoft pointed to its recent safety initiatives. Other companies have not yet commented.

AI outperforms humans in debate persuasiveness

AI can be more persuasive than humans in debates, especially when given access to personal information, a new study finds. Scientists warn this capability could be exploited in politics and misinformation campaigns.

Researchers discovered that GPT-4 changed opinions more effectively than human opponents in 64% of cases when it was able to tailor arguments using details like age, gender, and political views.

The experiments involved over 600 debates on topics ranging from school uniforms to abortion, with participants randomly assigned a stance. The AI's structured and adaptive communication style made it especially influential among people without strong pre-existing views.

While participants often identified when they were debating a machine, that did little to weaken the AI’s persuasive edge. Experts say this raises urgent questions about the role of AI in shaping public opinion, particularly during elections.

Though there may be benefits, such as promoting healthier behaviours or reducing polarisation, concerns about radicalisation and manipulation remain dominant. Researchers urge regulators to act swiftly to address potential abuses before they become widespread.

OpenAI launches advanced coding assistant Codex

OpenAI has launched Codex, a new AI coding agent designed to streamline software development by automating routine tasks and improving code reliability.

Built on a version of its o3 model known as codex-1, the agent uses reinforcement learning to generate high-quality code and test it before output.

Codex operates in a secure, cloud-based sandbox that mirrors a user’s environment and integrates with GitHub for real-time access to repositories.

It logs every step, provides test results, and supports customisation through AGENTS.md files, allowing developers to guide the AI.
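An AGENTS.md file is plain Markdown that Codex reads from the repository to learn project conventions. The sketch below is purely illustrative; the section names and commands are hypothetical examples, not an official template.

```markdown
# AGENTS.md — hypothetical example for illustration

## Setup
- Install dependencies with `npm install` before running anything.

## Testing
- Run `npm test` and ensure all tests pass before proposing a change.

## Conventions
- Follow the existing lint configuration; do not reformat unrelated files.
```

In practice, instructions like these let the agent run the project's own checks and match its style rather than guessing.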

Currently available to ChatGPT Pro, Enterprise, and Team subscribers, Codex is being piloted by major firms like Cisco, Superhuman, and Kodiak.

OpenAI plans wider access and future upgrades for more complex, asynchronous collaboration, though limitations like lack of image input support remain.

OpenAI, G42 plan world’s largest AI data facility

OpenAI is reportedly set to become the anchor tenant in a 5-gigawatt data centre project in Abu Dhabi, part of what could become one of the largest AI infrastructure builds globally, according to Bloomberg.

The facility, spanning approximately 10 square miles, is being developed by UAE-based tech firm G42 as part of OpenAI’s broader Stargate initiative, a joint venture announced with SoftBank and Oracle to establish high-capacity AI data centres worldwide.

While OpenAI’s first Stargate facility in Texas is projected to reach 1.2 gigawatts, the Abu Dhabi project would more than quadruple that. The planned scale would consume power equivalent to five nuclear reactors.

OpenAI and G42 have collaborated since 2023 to accelerate AI adoption in the Middle East. The partnership has sparked concerns among US officials, particularly around G42’s past ties to Chinese firms, including Huawei and BGI.

G42 has since pledged to divest from China and shift its focus. In early 2024, Microsoft invested $1.5 billion in G42, with Microsoft president Brad Smith joining its board, reinforcing US–UAE tech ties. An official statement from OpenAI on the project is still pending.

CoreWeave shares rebound after $4B OpenAI partnership announcement

Shares of AI cloud infrastructure company CoreWeave recovered on Thursday, gaining around 3% after the firm announced an expanded partnership with OpenAI worth up to $4 billion.

The deal helped ease investor concerns following the company’s earlier dip in trading.

CoreWeave stock had fallen as much as 9.1% earlier in the day after the company projected annual capital expenditures for 2025 would be roughly four times its expected revenue.

The forecast was included in CoreWeave’s first earnings report since going public in March.

The expanded agreement with OpenAI appears to have lifted investor sentiment, offsetting concerns about the company’s aggressive spending strategy as it builds out its AI-focused cloud infrastructure.

OpenAI launches AI safety hub

OpenAI has launched a public online hub to share internal safety evaluations of its AI models, aiming to increase transparency around harmful content, jailbreaks, and hallucination risks. The hub will be updated after major model changes, allowing the public to track progress in safety and reliability over time.

The move follows growing criticism about the company’s testing methods, especially after inappropriate ChatGPT responses surfaced in late 2023. Instead of waiting for backlash, OpenAI is now introducing an optional alpha testing phase, letting users provide feedback before wider model releases.

The hub also marks a departure from the company’s earlier stance on secrecy. In 2019, OpenAI withheld GPT-2 over misuse concerns. Since then, it has shifted towards transparency by forming safety-focused teams and responding to calls for open safety metrics.

OpenAI’s approach appears timely, as several countries are building AI Safety Institutes to evaluate models before launch. Instead of relying on private sector efforts alone, the global landscape now reflects a multi-stakeholder push to create stronger safety standards and governance for advanced AI.

Harvey adds Google and Anthropic AI

Harvey, the fast-growing legal AI startup backed early by the OpenAI Startup Fund, is now embracing foundation models from Google and Anthropic instead of relying solely on OpenAI’s.

In a recent blog post, the company said it would expand its AI model options after internal benchmarks showed that different tools excel at different legal tasks.

The shift marks a notable win for OpenAI’s competitors, even though Harvey insists it is not abandoning OpenAI. Its in-house benchmark, BigLaw Bench, revealed that several non-OpenAI models now outperform Harvey’s original system on specific legal functions.

For instance, Google’s Gemini 2.5 Pro performs well at legal drafting, while OpenAI’s o3 and Anthropic’s Claude 3.7 Sonnet are better suited for complex pre-trial work.

Instead of building its own models, Harvey now aims to fine-tune top-tier offerings from multiple vendors, including through Amazon’s cloud. The company also plans to launch a public legal benchmark leaderboard, combining expert legal reviews with technical metrics.

While OpenAI remains a close partner and investor, Harvey’s broader strategy signals growing competition in the race to serve the legal industry with AI.

OpenAI backs away from for-profit transition amid scrutiny

OpenAI has announced it will no longer pursue a full transition to a for-profit company. Instead, it will restructure its commercial arm as a public benefit corporation (PBC), retaining oversight by its nonprofit board.

The move comes after discussions with the attorneys general of California and Delaware, and growing concerns about governance and mission drift. The nonprofit board—best known for briefly removing CEO Sam Altman—will continue to oversee the company and appoint the PBC board.

Investors will now hold regular, uncapped equity in the PBC, replacing the previous 100x return cap, a change designed to attract future funding. The nonprofit will also gain a growing equity stake in the business arm.

In a message to staff, Altman said OpenAI remains committed to building AI that benefits humanity and sees this structure as the best path forward. Critics, including former staff, say questions remain about technology ownership and long-term priorities.

At the same time, Meta is positioning itself as a major rival. It recently launched a standalone AI assistant app, powered by its Llama 4 model and available across platforms including Ray-Ban smart glasses. The app includes a social Discover feed, encouraging interaction with shared AI outputs.

OpenAI’s new structure attempts to balance commercial growth with ethical governance—a model that may influence how other AI firms approach funding, control, and public accountability.

Microsoft and OpenAI rework billion dollar deal

OpenAI and Microsoft are renegotiating the terms of their multibillion-dollar partnership in a move designed to allow the ChatGPT maker to pursue a future public listing, while ensuring Microsoft retains access to its most advanced AI technology.

According to the Financial Times, the talks are centred around adjusting Microsoft’s equity stake in OpenAI’s for-profit arm.

The software giant has invested over US$13 billion in OpenAI and is reportedly prepared to reduce its stake in exchange for extended access to AI developments beyond the current 2030 agreement.

The revisions also include changes to a broader agreement first established in 2019 when Microsoft committed US$1 billion to the partnership.

The restructuring reflects OpenAI’s shift in strategy as it prepares for potential independence from its largest investor. Recent reports suggest the company plans to share a smaller portion of its future revenue with Microsoft, instead of maintaining current terms.

Microsoft has declined to comment on the ongoing negotiations, and OpenAI has yet to respond.

The talks follow OpenAI’s separate US$500 billion Stargate joint venture with Oracle and SoftBank to build AI data centres in the US, further signalling the strategic value of securing long-term access to cutting-edge models.
