OpenAI considers developing own AI chip with Broadcom

OpenAI, the maker of ChatGPT, is in discussions with Broadcom and other chip designers about developing a new AI chip. The move aims to address the shortage of the expensive graphics processing units required to develop its AI models, such as ChatGPT, GPT-4, and DALL-E 3.

The Microsoft-backed company is hiring former Google employees who developed the tech giant’s own AI chip and plans to create an AI server chip. OpenAI is exploring the idea of making its own AI chips to ensure a more stable supply of essential components.

OpenAI CEO Sam Altman has ambitious plans to raise billions of dollars to establish semiconductor manufacturing facilities. Potential partners for this venture include Intel, Taiwan Semiconductor Manufacturing Co, and Samsung Electronics.

A spokesperson for OpenAI said the company is in ongoing conversations with industry and government stakeholders about increasing access to the infrastructure needed to make the benefits of AI widely accessible.

OpenAI whistleblowers call for SEC investigation

Whistleblowers have filed a complaint with the US Securities and Exchange Commission (SEC) against OpenAI, calling for an investigation into the company’s allegedly restrictive non-disclosure agreements (NDAs). The complaint alleges that OpenAI’s NDAs required employees to waive their federal rights to whistleblower compensation, creating a chilling effect on their right to speak up.

Senator Chuck Grassley’s office provided the letter to Reuters, stating that OpenAI’s policies appear to prevent whistleblowers from receiving due compensation for their protected disclosures. The whistleblowers have requested that the SEC fine OpenAI for each improper agreement and review all contracts containing NDAs, including employment, severance, and investor agreements. OpenAI did not immediately respond to requests for comment.

This complaint follows other legal and regulatory challenges faced by OpenAI. The company has been sued for allegedly stealing people’s data, and US authorities have called for companies to ensure their AI products do not violate civil rights. OpenAI recently formed a Safety and Security Committee to address safety concerns as it begins training its next AI model.

OpenAI’s project Strawberry: Transformative AI sparks ethical debate

According to a Reuters report, Strawberry, a relatively new OpenAI project, is poised to make waves in the research community. The project, which some claim is a renamed version of the company’s Q* project from last year, is reported to be capable of navigating the web autonomously to conduct deep research.

A company representative confirmed to the news agency that the reasoning ability of its models will improve over time. Just last Tuesday, OpenAI employees were shown a demo of a model with human-like reasoning capabilities. The meeting came on the heels of criticism the company has faced for placing a gag order on employees that discouraged them from publicly exposing the dangers its innovations could pose to humanity.

Earlier in July, employees sent a seven-page letter to the chair of the US Securities and Exchange Commission (SEC), Gary Gensler, detailing the risks they believe OpenAI’s projects could pose to humans. The letter was tinged with urgency, advising the agency to take swift and aggressive action against the company for violating current regulations.

OpenAI introduces a five-tier system to measure AI progress

OpenAI has launched a five-tier system to measure its progress towards developing AI that can surpass human performance. The new classification aims to provide clearer insight into the company’s approach to AI safety and future goals. The system, unveiled to employees during an all-hands meeting, outlines stages ranging from conversational AI to advanced AI capable of running an entire organisation.

Currently, OpenAI is at the first level but is approaching the second stage, called ‘Reasoners’: AI systems that can perform basic problem-solving tasks as well as a human with a doctorate who has no access to additional tools. During the meeting, leadership showcased a research project involving the GPT-4 model, demonstrating new capabilities that exhibit human-like reasoning.

The five-tier framework is still a work in progress, with plans to gather feedback from employees, investors, and the board. OpenAI’s ultimate goal is to create artificial general intelligence (AGI), which involves developing AI that outperforms humans in most tasks. CEO Sam Altman remains optimistic that AGI could be achieved within this decade.
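To make the scale concrete, here is a minimal sketch of how the reported tiers could be encoded. Only Level 2 (‘Reasoners’), the Level 1 conversational stage, and the organisation-running top tier are described in the reporting, so the remaining labels are hypothetical placeholders rather than confirmed OpenAI terms:

```python
from enum import IntEnum

class OpenAITier(IntEnum):
    """Sketch of OpenAI's reported five-tier progress scale.

    Only Level 2 ('Reasoners') is named in the reporting; the
    intermediate labels are placeholders, not confirmed OpenAI terms.
    """
    CONVERSATIONAL_AI = 1  # chatbots; where OpenAI says it is today
    REASONERS = 2          # doctorate-level problem solving without extra tools
    LEVEL_3 = 3            # placeholder: tier not described in the report
    LEVEL_4 = 4            # placeholder: tier not described in the report
    ORGANISATIONS = 5      # AI capable of running an entire organisation

def next_milestone(tier: OpenAITier) -> OpenAITier | None:
    """Return the next tier on the scale, or None at the top."""
    return OpenAITier(tier + 1) if tier < OpenAITier.ORGANISATIONS else None

# Per the report, OpenAI is at Level 1 and approaching Level 2.
assert next_milestone(OpenAITier.CONVERSATIONAL_AI) is OpenAITier.REASONERS
```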

OpenAI and Los Alamos collaborate on AI research

OpenAI is partnering with Los Alamos National Laboratory, most famous for creating the first atomic bomb, to explore how AI can assist scientific research. The collaboration will evaluate how OpenAI’s latest model, GPT-4o, can support lab tasks and how its voice assistant technology can aid scientists. The initiative is part of OpenAI’s broader effort to showcase AI’s potential in healthcare and biotech, alongside recent partnerships with companies like Moderna and Color Health.

However, the rapid advancement of AI has sparked concerns about its potential misuse. Lawmakers and tech executives have expressed fears that AI could be used to develop bioweapons. Earlier tests by OpenAI indicated that GPT-4 posed only a slight risk of aiding in creating biological threats.

Erick LeBrun, a research scientist at Los Alamos, emphasised the importance of this partnership in understanding both the benefits and potential dangers of advanced AI. He highlighted the need for a framework to evaluate current and future AI models, particularly concerning biological threats.

OpenAI and Arianna Huffington fund AI health coach development

OpenAI and Arianna Huffington are teaming up to fund the development of an AI health coach through Thrive AI Health, aiming to personalise health guidance using scientific data and personal health metrics shared by users. The initiative, detailed in a Time magazine op-ed by OpenAI CEO Sam Altman and Huffington, seeks to leverage AI advancements to provide insights and advice across sleep, nutrition, fitness, stress management, and social connection.

DeCarlos Love, a former Google executive with experience in wearables, has been appointed CEO of Thrive AI Health. The company has also formed research partnerships with institutions like Stanford Medicine and the Rockefeller Neuroscience Institute to bolster its AI-driven health coaching capabilities.

While AI-powered health coaches are gaining popularity, concerns over data privacy and the potential for misinformation persist. Thrive AI Health aims to support users with personalised health tips, targeting individuals lacking access to immediate medical advice or specialised dietary guidance.

Why does this matter?

The development of AI in healthcare promises significant advancements, including accelerating drug development and enhancing diagnostic accuracy. However, challenges remain in ensuring the reliability and safety of AI-driven health advice, particularly in maintaining trust and navigating the limitations of AI’s capabilities in medical decision-making.

Microsoft steps down from OpenAI board

Microsoft has decided to relinquish its observer seat on OpenAI’s board, a position it took on last year amidst regulatory concerns. The decision comes as OpenAI’s governance has significantly improved over the past eight months. Apple, which was expected to take up the observer role, has chosen not to, according to sources, and did not comment on the matter.

OpenAI plans to engage with strategic partners like Microsoft and Apple through regular stakeholder meetings rather than board observer roles. Microsoft, which invested over $10 billion in OpenAI, cited the startup’s new partnerships, innovations, and growing customer base as reasons for stepping down from the observer position.

While EU regulators have stated that the Microsoft-OpenAI partnership does not fall under merger rules, they are reviewing exclusivity clauses in the agreement. However, British and US antitrust authorities continue to scrutinise Microsoft’s influence over OpenAI. To address these concerns and diversify its AI offerings, Microsoft is expanding its AI technology on the Azure platform and has hired Inflection’s CEO to lead its consumer AI division.

Chinese AI companies react to OpenAI block with SenseNova 5.5

At the recent World AI Conference in Shanghai, SenseTime introduced its latest model, SenseNova 5.5, showcasing capabilities comparable to OpenAI’s GPT-4o. This unveiling coincided with OpenAI’s decision to block its services in China, leaving developers scrambling for alternatives.

OpenAI’s move, effective from 9 July, blocks API access from regions where it does not support service, impacting Chinese developers who relied on its tools via virtual private networks. The decision, which comes amid US-China technology tensions, underscores broader concerns about global access to AI technologies.

The ban has prompted Chinese AI companies like SenseTime, Baidu, Zhipu AI, and Tencent Cloud to offer incentives, including free tokens and migration services, to lure former OpenAI users. Analysts suggest this could accelerate China’s AI development, challenging US dominance in generative AI technologies.

The development has sparked mixed reactions in China, with some viewing it as a push towards domestic AI independence amid geopolitical pressures. However, it also highlights challenges in China’s AI industry, such as its reliance on US semiconductors, which constrains models like Kuaishou’s.

OpenAI blocks Chinese users amid growing tech rivalry

At the recent World AI Conference in Shanghai, China’s leading AI company, SenseTime, unveiled its latest model, SenseNova 5.5, which can identify objects, provide feedback on drawings, and summarise text. Positioned as comparable to OpenAI’s GPT-4o, SenseNova 5.5 is being promoted with 50 million free tokens and free migration support from OpenAI services. The launch comes at a crucial time, as OpenAI will block Chinese users from accessing its tools starting 9 July, intensifying the rivalry between US and Chinese AI firms.

OpenAI’s decision to block Chinese users has sparked concern in China’s AI community, raising questions about equitable access to AI technologies. However, it has also created an opportunity for Chinese companies like SenseTime, Baidu, Zhipu AI, and Tencent Cloud to attract new users with free tokens and migration services, accelerating the development of Chinese AI companies that are already engaged in fierce competition.

Why does this matter?

The US-China tech rivalry has led to US restrictions on exporting advanced semiconductors to China, impacting the AI industry’s growth. While Chinese companies are quickly advancing, the US sanctions are causing shortages in computing capacity, as seen with Kuaishou’s AI model restrictions. Despite these challenges, Chinese commentators view OpenAI’s departure as a chance for China to achieve greater technological self-reliance and independence.

Hacker steals AI design details from OpenAI

A hacker infiltrated OpenAI’s internal messaging systems last year, stealing details about the design of its AI technologies, according to Reuters’ sources familiar with the matter. The breach involved discussions on an online forum where employees exchanged information about the latest AI developments. Crucially, the hacker did not gain access to the systems where OpenAI builds and houses its AI.

OpenAI, backed by Microsoft, did not publicly disclose the breach, as no customer or partner information was compromised. Executives briefed employees and the board but did not involve federal law enforcement, believing the hacker had no ties to foreign governments.

In a separate incident, OpenAI reported disrupting five covert operations that aimed to misuse its AI models for deceptive activities online. The issue raised safety concerns and prompted discussions about safeguarding advanced AI technology. The Biden administration plans to implement measures to protect US AI advancements from foreign adversaries. At the same time, 16 AI companies have pledged to develop the technology responsibly amid rapid innovation and emerging risks.