OpenAI announces its Mac app is now accessible to all users

OpenAI has announced that the ChatGPT app is now available to all macOS users. This update, shared via OpenAI’s official X account, extends access beyond the initial rollout to Plus subscribers.

After downloading the app, you can open it by pressing Option + Space, much like Apple’s Command + Space shortcut for Spotlight Search. According to OpenAI, the new app is ‘designed to integrate seamlessly’ with your Mac experience.

First introduced in May, the app’s announcement was somewhat overshadowed by the release of the chatbot’s newest version, GPT-4o. At the time, it was reserved exclusively for ChatGPT Plus subscribers, but now any user running macOS 14.0 Sonoma or later can use the chatbot for various tasks. Making the app more accessible and integrated is in line with Apple’s vision for its partnership with OpenAI.

The release of the app is a first test of Apple’s strategy to incorporate external AI tools into its devices. A ChatGPT app already exists for the iPhone, but at WWDC it was revealed that OpenAI’s technology would also be integrated directly into iPhones and iPads. Soon, users will be able to access ChatGPT through Siri and other AI-powered tools in Apple’s upcoming operating systems.

AI protections included in new Hollywood workers’ contracts

The International Alliance of Theatrical Stage Employees (IATSE) has reached a tentative three-year agreement with major Hollywood studios, including Disney and Netflix. The deal promises significant pay hikes and protections against the misuse of AI, addressing key concerns of the workforce.

Under the terms of the agreement, IATSE members, such as lighting technicians and costume designers, will receive pay raises of 7%, 4%, and 3.5% over the three-year period. These increases mark a substantial improvement in compensation for the crew members who are vital to film and television production.

A crucial element of the deal is language that prevents studios from requiring employees to provide AI prompts if doing so could result in job displacement. The provision aims to safeguard jobs against the potential threats posed by AI technologies in the industry.

The new agreement comes on the heels of a similar labor deal reached in late 2023 between the SAG-AFTRA actors’ union and the studios. That contract, which ended a nearly six-month production halt, provided substantial pay raises, streaming bonuses, and AI protections, amounting to over $1 billion in benefits over three years.

Why does it matter?

The IATSE’s tentative agreement represents a significant step forward in securing fair wages and job protections for Hollywood’s behind-the-scenes workers, ensuring that the rapid advancements in technology do not come at the expense of human employment.

Privacy concerns intensify as Big Tech announces new AI-enhanced functionalities

Apple, Microsoft, and Google are spearheading a technological revolution with their vision of AI smartphones and computers. These advanced devices aim to automate tasks like photo editing and sending birthday wishes, promising a seamless user experience. However, to achieve this level of functionality, these tech giants are seeking increased access to user data.

In this evolving landscape, users are confronted with the decision of whether to share more personal information. Windows computers may capture screenshots of user activities, iPhones could aggregate data from various apps, and Android phones might analyse calls in real time to detect potential scams. The shift towards data-intensive operations raises concerns about privacy and security, as companies require deeper insights into user behaviour to deliver tailored services.

The emergence of OpenAI’s ChatGPT has catalysed a transformation in the tech industry, prompting major players like Apple, Google, and Microsoft to revamp their strategies and invest heavily in AI-driven services. The focus is on creating a dynamic computing interface that continuously learns from user interactions to provide proactive assistance.

While the potential benefits of AI integration are substantial, the increased reliance on cloud computing and data processing carries inherent security risks. As AI algorithms demand more computational power, sensitive personal data may need to be transmitted to external servers for analysis. That transfer to the cloud introduces vulnerabilities, potentially exposing user information to unauthorised access by third parties.

Against this backdrop, tech companies have emphasised their commitment to safeguarding user data, implementing encryption and stringent protocols to protect privacy. As users navigate this evolving landscape of AI-driven technologies, understanding the implications of data sharing and the mechanisms employed to protect privacy is crucial.

Apple, Microsoft, and Google are at the forefront of integrating AI into their products and services, each with its own approach to data privacy and security. Apple, for instance, unveiled Apple Intelligence, a suite of AI services integrated into its devices, promising enhanced functionalities like object removal from photos and intelligent text responses. Apple is also revamping its voice assistant, Siri, to enhance its conversational abilities and give it access to data from various applications.

The company aims to process AI data locally to minimise external exposure, with stringent measures in place to secure data transmitted to servers. Apple’s commitment to protecting user data differentiates it from other companies that retain data on their servers. However, concerns have been raised about the lack of transparency regarding Siri requests sent to Apple’s servers. Security researcher Matthew Green argued that there are inherent security risks to any data leaving a user’s device for processing in the cloud.

Microsoft has introduced AI-powered features in its new Windows computers, called Copilot+ PCs, with data privacy and security underpinned by a new chip and other technologies. The Recall system enables users to quickly retrieve documents and files by typing casual phrases, with the computer taking screenshots every five seconds and analysing them directly on the PC. While Recall offers enhanced functionality, security researchers caution about potential risks if the data is hacked.

Google has also unveiled a suite of AI services, including a scam detector for phone calls and an ‘Ask Photos’ feature. The scam detector operates on the phone itself, without Google listening to calls, enhancing user security. However, concerns have been raised about the transparency of Google’s approach to AI privacy, particularly regarding the storage and potential use of personal data for improving its services.

Why does it matter?

As these tech giants continue to innovate with AI technologies, users must weigh the benefits of enhanced functionalities against potential privacy and security risks associated with data processing and storage in the cloud. Understanding how companies handle user data and ensuring transparency in data practices are essential for maintaining control over personal information in the digital age.

Toys ‘R’ Us pioneers text-to-video AI-generated advertising clip

Toys ‘R’ Us showcased an AI-generated short promotional clip, which depicts company founder Charles Lazarus as a child, at the 2024 Cannes Lions Festival in France. In an effort to regenerate interest in the brand, the company collaborated with the creative agency Native Foreign, using OpenAI’s Sora. Except for a few tweaks that required human input, the 66-second video was created entirely by Sora.

Sora is a diffusion model capable of three modes of video creation: generating videos from text prompts, extending, modifying, or filling in missing frames in an existing video, and animating elements in a still image. As a diffusion model, it starts from video resembling static noise and gradually transforms it by removing that noise over many steps.
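That ‘removing noise over many steps’ mechanism can be sketched in a few lines of Python. The snippet below is a generic, simplified DDPM-style sampling loop, shown purely to illustrate how diffusion generation works in general; `model` stands in for a hypothetical trained noise-prediction network and the schedule values are textbook defaults, since Sora’s actual architecture and sampling procedure are not public.

```python
import torch

@torch.no_grad()
def sample(model, shape, num_steps=50):
    # Illustrative reverse-diffusion loop (simplified DDPM sampling).
    # `model` is a hypothetical network trained to predict the noise in
    # its input; nothing here reflects Sora's real implementation.
    betas = torch.linspace(1e-4, 0.02, num_steps)   # noise schedule
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)

    x = torch.randn(shape)            # start from pure static noise
    for t in reversed(range(num_steps)):
        eps = model(x, t)             # predicted noise at step t
        # subtract the predicted noise component and rescale
        x = (x - betas[t] / torch.sqrt(1 - alpha_bars[t]) * eps) \
            / torch.sqrt(alphas[t])
        if t > 0:                     # re-inject a little noise except at the end
            x = x + torch.sqrt(betas[t]) * torch.randn_like(x)
    return x                          # progressively denoised frames
```

Each pass through the loop removes a small amount of the estimated noise, which is why a clip emerges gradually from static rather than in a single step.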

Company CMO Kim Miller told festival attendees that, despite the film’s AI foundation, production required human involvement throughout the process, including a reliance on creative intuition. The film received mixed reviews on social media, with some viewers even calling it creepy.

Industry leaders unite for ethical AI data practices

Several companies that license music, images, videos, and other datasets for training AI systems have formed the first trade group in the sector, the Dataset Providers Alliance (DPA). The founding members of the DPA include Rightsify, vAIsual, Pixta, and Datarade. The group aims to advocate for ethical data sourcing, including protecting intellectual property rights and ensuring rights for individuals depicted in datasets.

The rise of generative AI technologies has led to backlash from content creators and numerous copyright lawsuits against major tech companies like Google, Meta, and OpenAI. Developers often train AI models using vast amounts of content, much of which is scraped from the internet without permission. To address these issues, the DPA will establish ethical standards for data transactions, ensuring that members do not sell data obtained without explicit consent. The alliance will also push for legislative measures such as the NO FAKES Act, which would penalise unauthorised digital replicas of voices or likenesses, and will support transparency requirements for AI training data.

The DPA plans to release a white paper in July outlining its positions and advocating for these standards and legislative changes to ensure ethical practices in AI data sourcing and usage.

Qatar’s tech hub expansion accelerates as AI boosts its growth

Qatar is rapidly advancing in technology, positioning itself as a global tech hub. The President of Qatar Science and Technology Park (QSTP), Dr Jack Lau, highlighted the role of AI in boosting the Qatari and GCC markets, emphasising the need for region-specific, tailored solutions.

AI applications like ChatGPT are well-researched in Qatar. However, optimisation for different languages, increased speed, and more accurate responses have yet to be implemented.

Dr Lau expressed satisfaction with emerging AI tools, particularly for translating and customising presentation content. He stressed the importance of cultural sensitivity and corporate-specific needs in AI applications, while ensuring data privacy and security, and noted that these technologies still have significant room for refinement and further development.

QSTP plays a crucial role in supporting Qatar’s national vision of talent transformation through education, innovation, and entrepreneurship. The organisation is exploring opportunities for individuals with the right educational background to contribute significantly to AI, robotics, medical sciences, and sustainable farming.

Malaysia seeks Chinese investment for data centres

Malaysia is in talks with potential Chinese investors about building data centres as part of its strategy to attract high-quality investments, according to Economy Minister Rafizi Ramli. Rafizi stated in an interview that the government aims to enhance its infrastructure to leverage the AI boom.

In the past year, major US tech companies such as Microsoft, Google, and Nvidia have announced plans to establish data centres in Malaysia. Chinese firms are also interested in developing additional facilities to support local tech companies seeking to expand into the Southeast Asian market, Rafizi noted.

Rafizi emphasised Malaysia’s goal to expedite its transition from the back end to the front end of the semiconductor industry, focusing on integrated circuit design and data centres. He clarified that discussions with Chinese companies have not included using their Malaysian operations to bypass US tariffs.

OpenAI delays ChatGPT voice features amidst safety concerns and legal threats

OpenAI has announced a delay in launching new voice and emotion-reading features for its ChatGPT chatbot, citing the need for more safety testing. Originally set to be available to some paying subscribers in late June, these features will now be rolled out in the fall.

The postponement follows a demonstration last month that garnered user excitement and sparked controversy, including a potential lawsuit from actress Scarlett Johansson, who claimed her voice was mimicked for an AI persona.

OpenAI’s demo showcased the chatbot’s ability to speak in synthetic voices and respond to users’ tones and expressions, with one voice resembling Johansson’s performance in the movie ‘Her’. However, CEO Sam Altman denied using Johansson’s voice, clarifying that a different actor was used for training. The company aims to ensure the new features meet high safety and reliability standards before release.

The delay highlights ongoing challenges in the AI industry. Companies like Google and Microsoft have faced similar setbacks, dealing with errors and controversial outputs from their AI tools.

OpenAI emphasised the complexity of designing chatbots that interpret and mimic emotions, which can introduce new risks and potential for misuse. Additionally, competition in the AI industry is intensifying as companies race to meet the demands of an increasingly exacting market. Even so, the company appears committed to releasing these advanced features thoughtfully and safely.

CLTR urges UK government to create formal system for managing AI misuse and malfunctions

The UK should implement a system to log misuse and malfunctions in AI to keep ministers informed of alarming incidents, according to a report by the Centre for Long-Term Resilience (CLTR). The think tank, which focuses on responses to unforeseen crises, urges the next government to establish a central hub for recording AI-related episodes across the country, similar to the Air Accidents Investigation Branch.

CLTR highlights that since 2014, news outlets have recorded 10,000 AI ‘safety incidents,’ documented in a database by the Organisation for Economic Co-operation and Development (OECD). These incidents range from physical harm to economic, reputational, and psychological damage. Examples include a deepfake of Labour leader Keir Starmer and Google’s Gemini model depicting World War II soldiers inaccurately. The report’s author, Tommy Shaffer Shane, stresses that incident reporting has been transformative in aviation and medicine but is largely missing in AI regulation.

The think tank recommends the UK government adopt a robust incident reporting regime to manage AI risks effectively. It suggests following the safety protocols of industries like aviation and medicine, as many AI incidents may go unnoticed due to the lack of a dedicated AI regulator. Labour has pledged to introduce binding regulations for advanced AI companies, and CLTR emphasises that such a setup would help the government anticipate and respond quickly to AI-related issues.

Additionally, CLTR advises creating a pilot AI incident database, which could collect episodes from existing bodies such as the Air Accidents Investigation Branch and the Information Commissioner’s Office. The think tank also calls for UK regulators to identify gaps in AI incident reporting and build on the algorithmic transparency reporting standard already in place. An effective incident reporting system would help the Department for Science, Innovation and Technology (DSIT) stay informed and address novel AI-related harms proactively.
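To make the proposal concrete, a pilot database of this kind could store each episode as a small structured record. The sketch below is purely illustrative: CLTR does not prescribe a schema, and every field name here is an assumption, loosely mirroring the harm categories used by the OECD database mentioned above.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AIIncident:
    # Hypothetical record for a pilot AI incident database; all fields
    # are illustrative assumptions, not a CLTR or DSIT specification.
    reported_on: date
    source_body: str      # e.g. the ICO or the Air Accidents Investigation Branch
    system_involved: str  # the AI system or model implicated
    harm_category: str    # physical, economic, reputational, or psychological
    description: str      # brief narrative of the episode

# Example entry, modelled on the kind of incident cited above
incident = AIIncident(
    reported_on=date(2024, 1, 15),
    source_body="Information Commissioner's Office",
    system_involved="unnamed voice-cloning model",
    harm_category="reputational",
    description="Deepfake audio of a political figure circulated online.",
)
```

Even a minimal structure like this would let DSIT aggregate episodes across regulators and spot novel harms early.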

Experts join Regulating AI’s new advisory board

Regulating AI, a non-profit organisation dedicated to promoting AI governance, has announced the formation of its advisory board. Board members include notable figures such as former US Senator Cory Gardner, former Bolivian President Jorge Quiroga, and former Finnish Prime Minister Esko Aho. The board aims to foster a sustainable AI ecosystem that benefits humanity while addressing potential risks and ethical concerns.

The founder of Regulating AI, Sanjay Puri, expressed his excitement about the diverse expertise and perspectives the new board members bring. He emphasised the importance of their wisdom in navigating the complexities of the rapidly evolving AI landscape and shaping policies that balance innovation with ethical considerations and societal well-being.

One of the organisation’s key initiatives is developing a comprehensive AI governance framework. That includes promoting international cooperation, advocating for diverse voices, and exploring sector-specific AI implications. Former President of Bolivia Jorge Quiroga highlighted the transformational power of AI and the need for effective regulation that considers the unique challenges of developing nations.

Regulating AI aims to build public trust, align international standards, and empower various stakeholders through its board. Former US Senator Gardner underscored the necessity of robust regulatory frameworks to ensure AI is developed and deployed responsibly, protecting consumer privacy, preventing algorithmic bias, and upholding democratic values. The organisation also seeks to educate and raise awareness about AI regulations, fostering discussions among experts and policymakers to advance understanding and implementation.