Tech giant Apple announced that it will expand the language support of its generative AI, known as Apple Intelligence, to include German, Italian, Korean, Portuguese, Vietnamese, and more in 2025. This follows the introduction of English versions tailored for India and Singapore. The update will initially arrive with American English in iOS 18.1, expected later this year, with localised English for Australia, Canada, and other regions by the end of 2024.
In 2025, support for languages like Chinese, French, Japanese, and Spanish will also be added. However, Apple faces challenges in major markets, including the European Union, where regulatory hurdles linked to the Digital Markets Act delay its launch on iPhones and iPads. Despite this, the AI feature is already available in the EU through the macOS Sequoia 15.1 developer beta.
China presents even bigger obstacles due to strict local regulations on AI models. Apple is in talks with authorities in both the EU and China to resolve these issues and expand the availability of Apple Intelligence.
Runway, a generative AI startup, has announced a significant partnership with Lionsgate, the studio responsible for popular franchises such as John Wick and Twilight. This collaboration will enable Lionsgate’s creative teams, including filmmakers and directors, to utilise Runway’s AI video-generating models. These models have been trained on the studio’s film catalogue and will be used to enhance their creative work. Michael Burns, vice chair of Lionsgate, emphasised the potential for this partnership to support creative talent.
Runway is considering new opportunities, including licensing its AI models to individual creators, allowing them to create and train custom models. This partnership represents the first public collaboration between a generative AI startup and a major Hollywood studio. Although Disney and Paramount have reportedly been discussing similar partnerships with AI providers, no official agreements have been reached yet.
This deal comes at a time of increased attention on AI in the entertainment industry, due to California’s new laws that regulate the use of AI digital replicas in film and television. Runway is also currently dealing with legal challenges regarding the alleged use of copyrighted works to train its models without permission.
California Governor Gavin Newsom has signed two new bills into law aimed at protecting actors and performers from unauthorised use of their digital likenesses through AI. The measures were introduced in response to the increasing use of AI in the entertainment industry, which has raised concerns about the unauthorised replication of performers’ voices and images. The first bill mandates that contracts unambiguously specify the use of AI-generated digital replicas and requires professional representation for performers during negotiations.
The second bill restricts the commercial use of digital replicas of deceased performers. It prohibits their appearance in films, video games, and other media unless the performer’s estate gives explicit consent. These steps are crucial in safeguarding the rights of performers in a rapidly evolving digital landscape, where AI-generated content is becoming increasingly prevalent.
These legislative actions highlight widespread concerns about AI technology, not just in entertainment but across many industries. The increasing use of AI has raised worries about its potential to disrupt sectors, displace jobs, and even threaten democratic processes. Although President Biden’s administration has advocated for federal AI regulation, Congress is split, making comprehensive national-level legislation difficult to enact.
Tanzanian President Samia Suluhu Hassan has called for the integration of AI into the strategies of the Tanzania Police Force to address the escalating threat of cybercrime. Speaking at the 2024 Annual Senior Police Officers’ Meeting and the 60th Anniversary of the Tanzania Police Force, President Samia emphasised that in today’s digital age, leveraging advanced technology is crucial for effectively combating online threats. She highlighted the necessity for the police to adapt technologically to stay ahead of sophisticated cybercriminals, underlining the importance of embracing these advancements.
In her address, President Samia also drew attention to a troubling surge in cybercrime, with incidents increasing by 36.1% from 2022 to 2023. She noted that crimes such as fraud, false information dissemination, pornography distribution, and harassment have become more prevalent, with offenders frequently operating from outside Tanzania. The President’s remarks underscore the urgency of adopting advanced technological tools to address these growing challenges effectively and to enhance the police’s capability to counteract such threats.
Furthermore, President Samia emphasised the need to maintain peace and stability during the upcoming local government and general elections. She tasked the police with managing election-related challenges, including defamatory statements and misinformation, without resorting to internet shutdowns. Stressing the importance of preserving national peace amidst political activities, she underscored that while elections are temporary, a stable environment is essential for ongoing development and progress.
Mistral AI has launched a new free tier for developers to fine-tune and test apps using its AI models and has significantly reduced prices for API access to those models, the startup announced on Tuesday. The Paris-based company, valued at $6 billion, is introducing these updates to remain competitive with industry giants such as OpenAI and Google, which also offer free tiers for developers with limited usage. Mistral’s free tier, accessible through its platform ‘la Plateforme,’ enables developers to test its AI models at no cost, though paid access is required for commercial production.
Mistral has reduced the prices of its AI models, including Mistral NeMo and Codestral, by over 50% and cut the cost of its largest model, Mistral Large, by 33%. This decision reflects the increasing commoditisation of AI models in the developer space, with providers vying to offer more advanced tools at lower prices.
Mistral has integrated image processing into its consumer AI chatbot, le Chat, through its new multimodal model, Pixtral 12B. This model allows users to scan, analyse, and search image files alongside text, marking another advancement in the startup’s expanding AI capabilities.
The United States is set to host a global AI safety summit in November, focusing on international cooperation for AI safety. The summit will take place in San Francisco on 20-21 November, with Commerce Secretary Gina Raimondo and Secretary of State Antony Blinken overseeing the event. The gathering will include representatives from multiple countries and blocs, such as Australia, Canada, Japan, and the European Union, all part of the International Network of AI Safety Institutes.
The summit’s primary objective is to promote collaboration in ensuring the safe and secure development of AI technologies. Generative AI, which can produce text, images, and videos, has raised concerns over potential job losses, electoral manipulation, and broader risks to society. To address these issues, the summit will bring together technical experts to share knowledge and develop strategies for global AI safety.
Raimondo first introduced the idea of the International Network of AI Safety Institutes at the AI Seoul Summit in May, where countries agreed to prioritise safety and innovation in AI development. The upcoming event in the US will mark the first formal gathering of this group, ahead of the larger AI Action Summit scheduled for Paris in February 2025.
The Biden administration has already made strides in AI regulation, with President Biden signing an executive order last year. The order requires developers of AI systems posing national security or public health risks to submit safety test results before releasing their products to the public.
At its annual Snap Partner Summit, Snapchat announced new AI-powered features to improve the user experience. The app’s My AI chatbot now functions similarly to Google Lens. It enables users to take pictures of menus in foreign languages for translations, identify plants, or understand parking signs using AI. These updates aim to make My AI more practical, moving beyond entertainment to become a helpful tool for users.
Snapchat is introducing AI-powered edits for Snapchat+ subscribers through the ‘My Selfie’ feature. This feature allows users to enhance saved Snaps with captions and creative lenses. For example, users can transform a selfie into a Renaissance painting. Additionally, users can choose to be featured in AI-generated images with friends, such as being portrayed as lawyers or athletes.
Snapchat is also introducing a new AI-powered lens that shows users how they might look as they age, a response to TikTok’s popular old-age filter. Other updates include enhanced HD video calls, SnapMail for leaving messages when friends miss a call, and local time-zone displays in chats to improve worldwide user connections.
Slack is undergoing a major transformation as it integrates AI features into its platform, aiming to evolve from a simple messaging service to a ‘work operating system.’ CEO Denise Dresser said Slack will now serve as a hub for AI applications from companies like Salesforce, Adobe, and Anthropic. New, pricier features include AI-generated summaries of conversations and the ability to interact with AI agents for tasks such as data analysis, web searches, and image generation.
This shift follows Salesforce’s 2021 acquisition of Slack and its broader move toward AI-driven solutions. Slack’s AI integration seeks to enhance productivity by offering tools to catch up on team discussions, analyse business data, and create branded content, all within the chat environment. However, questions remain about whether users will embrace and pay for these premium features and how this change aligns with Slack’s core identity as a workplace communication tool.
Concerns around data privacy have also surfaced as Slack leans further into AI. The company faced criticism earlier this year over its handling of customer data used for training purposes, though it maintains that it does not use customer messages to train its AI models. As Slack continues integrating AI, it must address growing scepticism about how it manages and safeguards user data.
Google is planning to roll out new features that will enable the identification of images that have been generated or edited using AI in search results. This update will highlight such photos in the ‘About this image’ section across Google Search, Google Lens, and the Circle to Search feature on Android. In the future, this disclosure feature may also be extended to other Google platforms like YouTube.
To achieve this, Google will utilise C2PA metadata developed by the Coalition for Content Provenance and Authenticity. This metadata tracks an image’s history, including its creation and editing process. However, the adoption of C2PA standards is limited, and metadata can be altered or removed, which may impact the reliability of this identification method.
Despite the challenges, Google’s action addresses the increasing concerns about deepfakes and AI-generated content. There have been reports of a significant rise in scams involving such content, and losses related to deepfakes are expected to increase dramatically in the coming years. As public concern about deepfakes and AI-driven misinformation grows, Google’s initiative aims to provide more transparency in digital media.
The UAE’s AI ambitions take a leap forward with two new centres in Abu Dhabi, led by Microsoft and G42. The facilities will focus on developing responsible AI technologies and promoting best practices across the academic and private sectors.
One of the centres will address underrepresented languages in AI by creating large language models, while the other brings experts together to explore responsible AI usage. Both centres build on Microsoft’s recent $1.5 billion investment in G42.
Competition in AI is growing in the region, with Qatar and Saudi Arabia also seeking to emerge as key hubs. However, G42’s decision to divest from China ensures that the partnership aligns with US and UAE government security concerns.
By opening these new centres, the UAE hopes to bolster its position as a global AI leader, demonstrating its shift away from reliance on oil toward innovative technology development.