OpenAI’s Project Strawberry: Transformative AI sparks ethical debate

As OpenAI’s Strawberry advances the reasoning capabilities of its models, the company must contend with employees who consider such models a threat to humanity.

OpenAI’s recent hiring of Irina Kofman from Meta underscores the firm’s commitment to bolstering its strategic initiatives in the competitive AI landscape.

According to a Reuters report, Strawberry, a relatively new OpenAI project, is set to make waves in AI research. The project, which some claim could be a renamed version of the company’s Q* project from last year, is reported to be capable of navigating the internet autonomously to conduct deep research.

An OpenAI representative confirmed to the news agency that the reasoning ability of its models will continue to improve over time. Just last Tuesday, OpenAI employees were shown a demo of a model with human-like reasoning capabilities. The meeting came on the heels of criticism the company has faced for placing a gag order on employees, preventing them from publicly exposing the dangers its innovations could pose to humanity.

Earlier in July, employees sent a seven-page letter to the chair of the US Securities and Exchange Commission (SEC), Gary Gensler, detailing the risks they believe OpenAI’s projects pose to humans. The letter was tinged with urgency, advising the agency to take swift and aggressive action against the company for violating existing regulations.