OpenAI is partnering with Los Alamos National Laboratory, most famous for creating the first atomic bomb, to explore how AI can assist scientific research. The collaboration will evaluate how OpenAI’s latest model, GPT-4o, can support lab tasks, including using its voice assistant technology to aid scientists. This new initiative is part of OpenAI’s broader efforts to showcase AI’s potential in healthcare and biotech, alongside recent partnerships with companies like Moderna and Color Health.
However, the rapid advancement of AI has sparked concerns about its potential misuse. Lawmakers and tech executives have expressed fears that AI could be used to develop bioweapons. Earlier tests by OpenAI indicated that GPT-4 posed only a slight risk of aiding in creating biological threats.
Erick LeBrun, a research scientist at Los Alamos, emphasised the importance of this partnership in understanding both the benefits and potential dangers of advanced AI. He highlighted the need for a framework to evaluate current and future AI models, particularly concerning biological threats.
OpenAI and Arianna Huffington are teaming up to fund the development of an AI health coach through Thrive AI Health, aiming to personalise health guidance using scientific data and personal health metrics shared by users. The initiative, detailed in a Time magazine op-ed by OpenAI CEO Sam Altman and Huffington, seeks to leverage AI advancements to provide insights and advice across sleep, nutrition, fitness, stress management, and social connection.
DeCarlos Love, a former Google executive with experience in wearables, has been appointed CEO of Thrive AI Health. The company has also formed research partnerships with institutions like Stanford Medicine and the Rockefeller Neuroscience Institute to bolster its AI-driven health coaching capabilities.
While AI-powered health coaches are gaining popularity, concerns over data privacy and the potential for misinformation persist. Thrive AI Health aims to support users with personalised health tips, targeting individuals lacking access to immediate medical advice or specialised dietary guidance.
Why does this matter?
The development of AI in healthcare promises significant advancements, including accelerating drug development and enhancing diagnostic accuracy. However, challenges remain in ensuring the reliability and safety of AI-driven health advice, particularly in maintaining trust and navigating the limitations of AI’s capabilities in medical decision-making.
Microsoft has decided to relinquish its observer seat on OpenAI’s board, a position it took on last year amidst regulatory concerns. The decision comes as OpenAI’s governance has significantly improved over the past eight months. Apple, which was expected to take up the observer role, has chosen not to, according to sources, and did not comment on the matter.
OpenAI plans to engage with strategic partners like Microsoft and Apple through regular stakeholder meetings rather than board observer roles. Microsoft, which invested over $10 billion in OpenAI, cited the startup’s new partnerships, innovations, and growing customer base as reasons for stepping down from the observer position.
At the recent World AI Conference in Shanghai, China’s leading AI company, SenseTime, unveiled its latest model, SenseNova 5.5, which can identify objects, give feedback on drawings, and summarise text, with capabilities SenseTime says are comparable to OpenAI’s GPT-4o. The launch comes at a crucial time: from 9 July, OpenAI is blocking API access from regions where it does not support service, including China, cutting off developers who had relied on its tools via virtual private networks. The decision, amid US-China technology tensions, underscores broader concerns about global access to AI technologies.
The ban has prompted Chinese AI companies such as SenseTime, Baidu, Zhipu AI, and Tencent Cloud to court former OpenAI users with incentives, including free tokens (SenseTime is offering 50 million) and free migration services. While the move has sparked concern in China’s AI community about equitable access to AI technologies, some commentators view it as a chance to bolster domestic AI independence amidst geopolitical pressures, and analysts suggest it could accelerate China’s AI development, challenging US dominance in generative AI.
Why does this matter?
The US-China tech rivalry has led to US restrictions on exporting advanced semiconductors to China, impacting the AI industry’s growth. While Chinese companies are quickly advancing, the US sanctions are causing shortages in computing capacity, as seen with Kuaishou’s AI model restrictions. Despite these challenges, Chinese commentators view OpenAI’s departure as a chance for China to achieve greater technological self-reliance and independence.
A hacker infiltrated OpenAI’s internal messaging systems last year, stealing details about the design of its AI technologies, according to Reuters’ sources familiar with the matter. The breach involved discussions on an online forum where employees exchanged information about the latest AI developments. Crucially, the hacker did not gain access to the systems where OpenAI builds and houses its AI.
OpenAI, backed by Microsoft, did not publicly disclose the breach, as no customer or partner information was compromised. Executives briefed employees and the board but did not involve federal law enforcement, believing the hacker had no ties to foreign governments.
In a separate incident, OpenAI reported disrupting five covert operations that aimed to misuse its AI models for deceptive activities online. The issue raised safety concerns and prompted discussions about safeguarding advanced AI technology. The Biden administration plans to implement measures to protect US AI advancements from foreign adversaries. At the same time, 16 AI companies have pledged to develop the technology responsibly amid rapid innovation and emerging risks.
OpenAI’s ChatGPT macOS app was found to be storing user chats in plain text until recently, raising security concerns. The Verge reported that the AI firm has now released an update to encrypt conversations on macOS. The discovery was made by software developer Pedro Vieito, who noted that OpenAI was distributing the app exclusively through its website and bypassing Apple’s sandbox protections.
Sandboxing, which isolates an app and its data from the rest of the system, is optional on macOS, but is commonly used by chat applications to protect sensitive information. By not adhering to this security measure, the ChatGPT app exposed user chats to potential threats. Vieito highlighted the vulnerability on social media, showing how easily another app could access the unprotected data.
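The risk Vieito demonstrated can be sketched in a few lines. This is an illustrative toy, not the ChatGPT app’s actual code or file layout: the directory names and filenames below are hypothetical stand-ins. The point is simply that a file written unencrypted outside a sandbox can be read by any other process that knows its path.

```python
import tempfile
from pathlib import Path

def write_chat_plaintext(support_dir: Path, conversation: str) -> Path:
    """Simulate a non-sandboxed app storing a conversation unencrypted.
    The 'conversations/chat-001.txt' layout is a hypothetical example."""
    chat_file = support_dir / "conversations" / "chat-001.txt"
    chat_file.parent.mkdir(parents=True, exist_ok=True)
    chat_file.write_text(conversation)
    return chat_file

def snoop(chat_file: Path) -> str:
    """Stand-in for any *other* process: with no sandbox isolating the
    data, reading the chat requires nothing more than the file path."""
    return chat_file.read_text()

if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as tmp:
        chat = write_chat_plaintext(Path(tmp), "user: hello\nassistant: hi")
        print(snoop(chat))  # the full conversation, readable by anyone
```

Encrypting the file at rest, as the updated app now does, means `snoop` would recover only ciphertext rather than the conversation itself.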
OpenAI acknowledged the issue and emphasised that users could opt out of having their chats used to train the AI models. The ChatGPT app, which was made available to macOS users on June 25, now includes encryption to enhance user privacy and security.
A French AI research lab, Kyutai, backed by billionaire Xavier Niel, unveiled a new voice assistant, Moshi, that can express 70 different emotions and styles. Revealed at an event in Paris, Moshi demonstrated capabilities such as offering advice on climbing Mt. Everest and reciting poems with a thick French accent. According to Kyutai’s CEO, Patrick Pérez, this assistant could revolutionise human-machine communication.
Moshi enters a competitive landscape dominated by OpenAI’s ChatGPT and other players like Google and Anthropic. Despite OpenAI’s recent delay in launching a similar voice assistant due to safety concerns, Kyutai plans to release Moshi as open-source technology, allowing free access to its code and research. Such a step aims to foster transparency and collaboration in AI development.
Funded with €300 million and led by former Google DeepMind and Meta Platforms researchers, Kyutai seeks to position Europe as a significant player in AI. During the event, Chief Science Officer Hervé Jégou addressed safety concerns, saying that tools such as indexing and watermarking will be used to track AI-generated audio. The new voice assistant highlights Europe’s potential to advance AI technology globally.
Apple Inc. has secured an observer role on OpenAI’s board, further solidifying their growing partnership. Phil Schiller, head of Apple’s App Store and former marketing chief, will take on this position. As an observer, Schiller will attend board meetings without voting rights or other director powers. The development follows Apple’s announcement of integrating ChatGPT into its devices, such as the iPhone, iPad, and Mac, as part of its AI suite.
Aligning Apple with OpenAI’s principal backer, Microsoft Corp., the observer role offers Apple valuable insights into OpenAI’s decision-making processes. However, Microsoft and Apple’s rivalry might lead to Schiller’s exclusion from certain discussions, particularly those concerning future AI initiatives between OpenAI and Microsoft. Schiller’s extensive experience with Apple’s brand makes him a suitable candidate for this role, despite his lack of direct involvement in Apple’s AI projects.
The partnership with OpenAI is a key part of Apple’s broader AI strategy, which includes a variety of in-house features under Apple Intelligence. These features range from summarising articles and notifications to creating custom emojis and transcribing voice memos. The integration of OpenAI’s chatbot will meet current consumer demand, with a paid version of ChatGPT potentially generating App Store fees. The arrangement involves no direct payment between the companies: OpenAI gains access to Apple’s vast user base, while Apple benefits from the chatbot’s capabilities.
Apple is also in discussions with Alphabet Inc.’s Google, startup Anthropic, and Chinese companies Baidu Inc. and Alibaba Group Holding Ltd. to offer more chatbot options to its customers. Initially, Apple Intelligence will be available in American English, with plans for an international rollout. Furthermore, a collaboration like this marks a rare instance of an Apple executive joining the board of a major partner, highlighting the significance of this partnership in Apple’s AI strategy.
In a unique twist on political campaigning, a Wyoming man named Victor Miller has entered the mayoral race in Cheyenne with an AI bot called ‘VIC.’ Miller, who works at a Laramie County library, sees VIC as a revolutionary tool for improving government transparency and accountability. However, just before a scheduled interview with Fox News Digital, Miller faced a significant setback when OpenAI closed his account, jeopardising his campaign.
Despite this challenge, Miller remains determined to continue promoting VIC, hoping to demonstrate its potential at a public event in Laramie County. He believes that AI technology can streamline government processes and reduce human error, although he is now contemplating whether to declare his reliance on VIC formally. The decision comes as he navigates the restrictions imposed by OpenAI, which cited policy violations related to political campaigning.
Miller’s vision extends beyond his mayoral bid. He has called for support from prominent figures in the AI industry, like Elon Musk, to develop an open-source model that ensures equal access to this emerging technology. His campaign underscores a broader debate about open versus closed AI models, emphasising the need for transparency and fairness in technological advancements.
Wyoming’s legal framework, however, presents additional hurdles. State officials have indicated that candidates must be real persons and use their full names on the ballot. The issue complicates VIC’s candidacy, as the AI bot cannot meet these requirements. Nevertheless, Miller’s innovative approach has sparked conversations about the future role of AI in governance, with similar initiatives emerging globally.