OpenAI’s ChatGPT macOS app was found to be storing user chats in plain text until recently, raising security concerns. The Verge reported that the AI firm has now released an update to encrypt conversations on macOS. The discovery was made by software developer Pedro Vieito, who noted that OpenAI was distributing the app exclusively through its website and bypassing Apple’s sandbox protections.
Sandboxing, which isolates an app and its data from the rest of the system, is optional on macOS, but is commonly used by chat applications to protect sensitive information. By not adhering to this security measure, the ChatGPT app exposed user chats to potential threats. Vieito highlighted the vulnerability on social media, showing how easily another app could access the unprotected data.
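The core of the problem is that macOS file permissions are per-user: absent sandboxing or encryption, any process running as the same user can read another app’s data directory. A minimal sketch of what Vieito demonstrated might look like the following; the function name is illustrative, and the actual path the ChatGPT app used is an assumption, not something confirmed by this article.

```python
import os

def read_unprotected_files(app_support_dir):
    """Read every regular file under another app's data directory.

    Without sandboxing or encryption, nothing stops a process running
    as the same user from walking another app's files like this.
    """
    contents = {}
    for root, _dirs, files in os.walk(app_support_dir):
        for name in files:
            path = os.path.join(root, name)
            try:
                with open(path, "r", errors="replace") as f:
                    contents[path] = f.read()
            except OSError:
                pass  # skip files the OS refuses to open
    return contents

# Illustrative only -- the ChatGPT app's storage location is assumed here:
# chats = read_unprotected_files(
#     os.path.expanduser("~/Library/Application Support/com.openai.chat"))
```

Encrypting the stored conversations, as the update now does, means a snippet like this would only recover ciphertext rather than readable chats.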
OpenAI acknowledged the issue and emphasised that users could opt out of having their chats used to train the AI models. The ChatGPT app, which was made available to macOS users on June 25, now includes encryption to enhance user privacy and security.
A French AI research lab, Kyutai, backed by billionaire Xavier Niel, unveiled a new voice assistant, Moshi, that can express 70 different emotions and styles. Revealed at an event in Paris, Moshi demonstrated capabilities such as offering advice on climbing Mt. Everest and reciting poems with a thick French accent. According to Kyutai’s CEO, Patrick Pérez, this assistant could revolutionise human-machine communication.
Moshi enters a competitive landscape dominated by OpenAI’s ChatGPT and other players like Google and Anthropic. Despite OpenAI’s recent delay in launching a similar voice assistant due to safety concerns, Kyutai plans to release Moshi as open-source technology, allowing free access to its code and research. Such a step aims to foster transparency and collaboration in AI development.
Funded with €300 million and led by former Google DeepMind and Meta Platforms researchers, Kyutai seeks to position Europe as a significant player in AI. During the event, Chief Science Officer Hervé Jégou addressed safety concerns, explaining that tools such as indexing and watermarking will be used to track AI-generated audio. The new voice assistant highlights Europe’s potential to advance AI technology globally.
Apple Inc. has secured an observer role on OpenAI’s board, further solidifying their growing partnership. Phil Schiller, head of Apple’s App Store and former marketing chief, will take on this position. As an observer, Schiller will attend board meetings without voting rights or other director powers. The development follows Apple’s announcement of integrating ChatGPT into its devices, such as the iPhone, iPad, and Mac, as part of its AI suite.
Aligning Apple with OpenAI’s principal backer, Microsoft Corp., the observer role offers Apple valuable insights into OpenAI’s decision-making processes. However, Microsoft and Apple’s rivalry might lead to Schiller’s exclusion from certain discussions, particularly those concerning future AI initiatives between OpenAI and Microsoft. Schiller’s extensive experience with Apple’s brand makes him a suitable candidate for this role, despite his lack of direct involvement in Apple’s AI projects.
The partnership with OpenAI is a key part of Apple’s broader AI strategy, which includes a variety of in-house features under Apple Intelligence. These features range from summarising articles and notifications to creating custom emojis and transcribing voice memos. The integration of OpenAI’s chatbot will meet current consumer demand, and a paid version of ChatGPT could eventually generate App Store fees. The arrangement itself involves no financial transactions: OpenAI gains access to Apple’s vast user base, while Apple benefits from the chatbot’s capabilities.
Apple is also in discussions with Alphabet Inc.’s Google, startup Anthropic, and Chinese companies Baidu Inc. and Alibaba Group Holding Ltd. to offer more chatbot options to its customers. Initially, Apple Intelligence will be available in American English, with plans for an international rollout. Furthermore, a collaboration like this marks a rare instance of an Apple executive joining the board of a major partner, highlighting the significance of this partnership in Apple’s AI strategy.
In a unique twist on political campaigning, a Wyoming man named Victor Miller has entered the mayoral race in Cheyenne with an AI bot called ‘VIC.’ Miller, who works at a Laramie County library, sees VIC as a revolutionary tool for improving government transparency and accountability. However, just before a scheduled interview with Fox News Digital, Miller faced a significant setback when OpenAI closed his account, jeopardising his campaign.
Despite this challenge, Miller remains determined to continue promoting VIC, hoping to demonstrate its potential at a public event in Laramie County. He believes that AI technology can streamline government processes and reduce human error, although he is now contemplating whether to declare his reliance on VIC formally. The decision comes as he navigates the restrictions imposed by OpenAI, which cited policy violations related to political campaigning.
Miller’s vision extends beyond his mayoral bid. He has called for support from prominent figures in the AI industry, like Elon Musk, to develop an open-source model that ensures equal access to this emerging technology. His campaign underscores a broader debate about open versus closed AI models, emphasising the need for transparency and fairness in technological advancements.
Wyoming’s legal framework, however, presents additional hurdles. State officials have indicated that candidates must be real persons and use their full names on the ballot. The issue complicates VIC’s candidacy, as the AI bot cannot meet these requirements. Nevertheless, Miller’s innovative approach has sparked conversations about the future role of AI in governance, with similar initiatives emerging globally.
The Center for Investigative Reporting (CIR), known for producing Mother Jones and Reveal, has sued OpenAI and Microsoft, accusing them of using its content without permission and compensation. The lawsuit, filed in New York federal court, claims that OpenAI’s business model is based on exploiting copyrighted works and argues that AI-generated summaries threaten the financial stability of news organisations by reducing direct engagement with their content.
CIR’s CEO, Monika Bauerlein, emphasised the danger of AI tools replacing direct relationships between readers and news organisations, potentially undermining the foundations of independent journalism. The lawsuit is part of a broader legal challenge faced by OpenAI and Microsoft, with similar suits filed by other media outlets and authors.
Why does it matter?
Some news organisations have opted to collaborate with OpenAI, signing deals to allow the use of their content for AI training in exchange for compensation. Despite OpenAI’s argument that its use of publicly accessible content falls under ‘fair use,’ CIR’s lawsuit highlights the financial and ethical implications of using copyrighted material without proper attribution or payment, warning of significant impacts on investigative journalism and democracy.
EU antitrust regulators are scrutinising Microsoft’s partnership with OpenAI and Google’s AI deal with Samsung over concerns about exclusivity clauses. Competition chief Margrethe Vestager plans to gather more third-party views. The development comes amid global unease about Big Tech’s dominance in new technologies.
After sending questionnaires to tech firms regarding their AI partnerships, Vestager now seeks additional information about Microsoft’s $13 billion investment in OpenAI’s for-profit subsidiary, which would result in a 49% stake, to determine if it harms competitors.
‘📢 For now, we conclude that @Microsoft has not acquired control of @OpenAI under 🇪🇺 Merger Regulation. We will keep monitoring the relationships between all key players in the AI sector, incl. Microsoft & OpenAI,’ Vestager said.
While Microsoft’s deal isn’t subject to EU merger rules, Vestager is also investigating whether Big Tech is blocking smaller AI developers from reaching users and businesses. Similar concerns apply to Google’s agreement to pre-install its Gemini Nano model on Samsung devices.
Vestager also examines ‘acqui-hires,’ where companies acquire others primarily for their talent, such as Microsoft’s $650-million acquisition of Inflection, to ensure these practices don’t bypass merger control rules and lead to market concentration.
Why does it matter?
Reuters reported in April that the EU regulators were building a case that could lead to an antitrust investigation into Microsoft’s $13 billion investment in OpenAI. Partnerships involving Alphabet, Amazon, and Anthropic are also under scrutiny from antitrust enforcers on both sides of the Atlantic.
OpenAI has launched CriticGPT, a new model based on GPT-4, designed to identify and critique errors in ChatGPT’s outputs. The tool aims to enhance human trainers’ effectiveness by assisting them in providing feedback on the chatbot’s performance.
Similar to ChatGPT’s training process, CriticGPT learns through human feedback, focusing on identifying intentionally inserted errors in ChatGPT’s code outputs. Evaluations showed that CriticGPT’s critiques were preferred over ChatGPT’s in 63% of cases involving naturally occurring bugs, highlighting its ability to minimise irrelevant feedback.
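The 63% figure is a pairwise preference rate: reviewers compare two critiques of the same buggy output and pick the better one. A minimal sketch of how such a win rate is tallied, with the function name and labels purely illustrative:

```python
def preference_win_rate(judgments):
    """Fraction of pairwise comparisons in which reviewers preferred
    model A's critique over model B's.

    `judgments` is a list of "A" / "B" labels, one per comparison.
    """
    if not judgments:
        raise ValueError("no judgments provided")
    wins = sum(1 for j in judgments if j == "A")
    return wins / len(judgments)
```

For example, 63 out of 100 comparisons favouring CriticGPT would yield a win rate of 0.63, the headline result reported above.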
OpenAI plans to further develop CriticGPT’s capabilities, aiming to integrate advanced methods to improve human-generated feedback for GPT-4. The initiative underscores the ongoing role of human oversight in refining AI technologies despite their increasing automation capabilities.
Time magazine has entered a multi-year agreement with OpenAI, granting the AI firm access to its news archives. The deal allows OpenAI’s ChatGPT to cite and link back to Time.com in user queries, although financial details were not disclosed. OpenAI, led by Sam Altman, has forged similar partnerships with prominent media outlets such as the Financial Times, Axel Springer, Le Monde, and Prisa Media.
These collaborations help train and enhance OpenAI’s products while giving media companies access to AI technology for developing new products. Despite some media companies suing OpenAI over content usage, such partnerships are crucial for training AI models and offer a potential revenue stream for news publishers. The trend comes amid broader industry tensions, highlighted by Meta’s decision to block news sharing in Canada following new legislation requiring payment for news content.
Why does it matter?
The OpenAI-Time deal is part of a larger movement where publishers seek fair compensation for their content amid the rise of generative AI, which has prompted discussions on ethical content usage and compliance with web standards.
OpenAI CEO Sam Altman has credited Airbnb CEO Brian Chesky for playing a crucial role in OpenAI’s rapid expansion. Speaking at the Aspen Ideas Festival, Altman revealed that Chesky provided invaluable guidance during the company’s growth phase following the success of ChatGPT.
Chesky’s hands-on mentorship involved spending hours each week offering practical advice on tasks and strategic decisions. Altman shared how Chesky’s insights were instrumental in managing the swift user growth of ChatGPT, which became the fastest-growing consumer application in history, reaching over one million users in five days and 100 million monthly active users by January 2023.
Beyond operational advice, Chesky helped Altman with strategic decisions, including hiring and considering the political implications of generative AI technology.
Altman acknowledged that Chesky’s input was vital in shaping OpenAI’s approach and admitted he had not fully considered the political consequences before Chesky’s guidance.
Microsoft has stated it will keep providing eligible customers in Hong Kong with access to OpenAI’s AI models, like ChatGPT, via its Azure cloud platform. The decision stands despite OpenAI’s recent move to restrict API access from unsupported areas, including mainland China and Hong Kong.
OpenAI, with Microsoft as its biggest investor, notified developers in unsupported regions that it would begin blocking API access on 9 July. That step aligns with the US government’s efforts to curb China’s access to advanced AI technology due to national security concerns.
Microsoft’s local branch assured there will be no changes to their Azure OpenAI service offerings in Hong Kong. Although OpenAI’s services are not officially available in mainland China and Hong Kong, users in these regions often circumvent restrictions using virtual private networks or proxies.
Why does it matter?
The restriction by OpenAI aligns with broader US efforts to limit China’s access to advanced technology, reflecting ongoing tensions and strategic competition between the US and China. Microsoft’s decision to maintain services in Hong Kong contrasts with OpenAI’s broader restrictions, potentially pushing Chinese developers toward local AI platforms such as Zhipu AI, Baichuan, and those from major tech companies like Alibaba and Baidu. These local alternatives offer incentives to attract users impacted by OpenAI’s new policies.