Open Rights Group slams LinkedIn for data use in AI without consent

LinkedIn has come under scrutiny for using user data to train AI models without updating its privacy terms in advance. While LinkedIn has since revised its terms, United States users were not informed beforehand, which would normally give them time to make decisions about their accounts. LinkedIn offers an opt-out setting for data used in generative AI, but this was not initially reflected in its privacy policy.

LinkedIn clarified that its AI models, including content creation tools, use user data. Some models on its platform may also be trained by external providers like Microsoft. LinkedIn assures users that privacy-enhancing techniques, such as redacting personal information, are employed during the process.

The Open Rights Group has criticised LinkedIn for not seeking consent from users before collecting data, calling the opt-out method inadequate for protecting privacy rights. Regulatory bodies, including Ireland's Data Protection Commission, have been monitoring the situation, especially within regions under GDPR protection, where user data is not used for AI training.

LinkedIn is one of several platforms reusing user-generated content for AI training. Others, like Meta and Stack Overflow, have also begun similar practices, with some users protesting the reuse of their data without explicit consent.

Meta and Spotify criticise EU decisions on AI

Several tech companies, including Meta and Spotify, have criticised the European Union for what they describe as inconsistent decision-making on data privacy and AI. A collective letter from firms, researchers, and industry bodies warned that Europe risks losing competitiveness due to fragmented regulations. They urged data privacy regulators to deliver clear, harmonised decisions, allowing European data to be utilised in AI training for the benefit of the region.

The companies voiced concerns about the unpredictability of recent decisions made under the General Data Protection Regulation (GDPR). Meta, known for owning Facebook and Instagram, recently paused plans to collect European user data for AI development, following pressure from EU privacy authorities. Uncertainty surrounding which data can be used for AI models has become a major issue for businesses.

Tech firms have delayed product releases in Europe while seeking legal clarity. Meta postponed the EU launch of its Twitter-like app Threads, while Google has also delayed the launch of AI tools in the EU market. The introduction of Europe's AI Act earlier this year added further regulatory requirements, which firms argue complicate innovation.

The European Commission insists that all companies must comply with data privacy rules, and Meta has already faced significant penalties for breaches. The letter stresses the need for swift regulatory decisions to ensure Europe can remain competitive in the AI sector.

Alibaba unveils text-to-video AI technology

Chinese multinational technology company Alibaba has intensified its push into the generative AI space by releasing new open-source AI models and text-to-video technology. The Chinese tech giant's latest models, part of its Qwen 2.5 family, range from 0.5 billion to 72 billion parameters, cover fields such as mathematics and coding, and support more than 29 languages.

This marks Alibaba’s shift towards a hybrid approach, combining both open-source and proprietary AI developments, as it competes with rivals such as Baidu and OpenAI, which favor closed-source models. The newly introduced text-to-video model, part of the Tongyi Wanxiang family, positions Alibaba as a key player in the rapidly growing AI-driven content creation market.

The company’s new AI offerings aim to serve a wide range of industries, from automotive and gaming to scientific research, solidifying its role in shaping the future of AI across various sectors.

New AI video features launched at YouTube event

During the recent 'Made on YouTube' event, several new features were announced, with the highlight being the integration of AI into YouTube Shorts. The company is incorporating Google DeepMind's AI video generation model, Veo, first introduced at Google's I/O 2024. This integration will allow creators to generate high-quality backgrounds and six-second video clips in various cinematic styles. Veo is capable of producing 1080p clips and is positioned to compete with similar AI tools from OpenAI, Runway, and others. It is considered an upgrade to YouTube's Dream Screen, launched in 2023, and is intended to make the content creation process smoother and more dynamic.

The new Veo-powered feature in Dream Screen allows creators to choose from AI-generated images and convert them into short video clips, enabling smoother transitions in content creation. This tool is expected to enhance storytelling on Shorts, for instance, by adding cityscapes or filler scenes to enrich the narrative. The resulting videos will be watermarked using DeepMind’s SynthID technology to indicate that they are AI-produced content.

YouTube has introduced new features to improve user interaction in addition to the Veo update. One of these features is “Jewels,” which allows viewers to send digital items to creators during livestreams, similar to TikTok’s gifting option. The platform has also expanded its automatic dubbing tool to cover more languages and is testing more expressive voice dubbing. Furthermore, YouTube has added AI brainstorming tools for video ideas, AI-generated thumbnails, and AI-assisted comments to help creators engage more effectively with their audiences.

Open-source AI models launched by Alibaba

Alibaba pushes forward with AI innovation, launching a wide range of open-source models and text-to-video technology. The Chinese tech giant’s latest release includes over 100 models from its Qwen 2.5 family, offering significant improvements in mathematics, coding, and multilingual support.

These models aim to enhance AI capabilities in various industries, including gaming, automotive, and scientific research. Alibaba has adopted a unique hybrid approach, combining open-source and proprietary methods, setting itself apart from competitors like OpenAI and Baidu.

With model sizes ranging from 0.5 to 72 billion parameters, Alibaba’s AI tools cater to diverse business needs. The company’s text-to-video technology, part of its Tongyi Wanxiang image generation family, positions it as a key player in the expanding text-to-video market.

As competition in AI technology intensifies globally, Alibaba’s new developments could challenge major players such as OpenAI and ByteDance. ByteDance recently launched a text-to-video app for Chinese users on Apple’s App Store, further highlighting the rising interest in this technology.

Apple Intelligence to add new languages next year

Tech giant Apple has announced that it will expand the language support of its generative AI suite, Apple Intelligence, to include German, Italian, Korean, Portuguese, Vietnamese, and more in 2025, following the introduction of English versions tailored for India and Singapore. The feature will initially arrive with American English in iOS 18.1, expected later this year, with localised English for Australia, Canada, and other regions by the end of 2024.

In 2025, support for languages like Chinese, French, Japanese, and Spanish will also be added. However, Apple faces challenges in major markets, including the European Union, where regulatory hurdles linked to the Digital Markets Act delay its launch on iPhones and iPads. Despite this, the AI feature is already available in the EU through the macOS Sequoia 15.1 developer beta.

China presents even bigger obstacles due to strict local regulations on AI models. Apple is in talks with authorities in both the EU and China to resolve these issues and expand the availability of Apple Intelligence.

Runway partners with Lionsgate to revolutionise film-making

Runway, a generative AI startup, has announced a significant partnership with Lionsgate, the studio responsible for popular franchises such as John Wick and Twilight. This collaboration will enable Lionsgate’s creative teams, including filmmakers and directors, to utilise Runway’s AI video-generating models. These models have been trained on the studio’s film catalogue and will be used to enhance their creative work. Michael Burns, vice chair of Lionsgate, emphasised the potential for this partnership to support creative talent.

Runway is considering new opportunities, including licensing its AI models to individual creators, allowing them to create and train custom models. This partnership represents the first public collaboration between a generative AI startup and a major Hollywood studio. Although Disney and Paramount have reportedly been discussing similar partnerships with AI providers, no official agreements have been reached yet.

This deal comes at a time of increased attention on AI in the entertainment industry, due to California’s new laws that regulate the use of AI digital replicas in film and television. Runway is also currently dealing with legal challenges regarding the alleged use of copyrighted works to train its models without permission.

New California laws safeguard actors from AI exploitation

California Governor Gavin Newsom has signed two new bills into law aimed at protecting actors and performers from unauthorised use of their digital likenesses through AI. The measures were introduced in response to the increasing use of AI in the entertainment industry, which has raised concerns about the unauthorised replication of performers' voices and images. The first bill mandates that contracts unambiguously specify the use of AI-generated digital replicas and requires professional representation for performers during negotiations.

The second bill restricts the commercial use of digital replicas of deceased performers. It prohibits their appearance in films, video games, and other media unless the performer’s estate gives explicit consent. These steps are crucial in safeguarding the rights of performers in a rapidly evolving digital landscape, where AI-generated content is becoming increasingly prevalent.

The legislative actions mentioned highlight widespread concerns about AI technology, not just in entertainment but across different industries. The increasing use of AI has raised worries about its potential to disrupt sectors, lead to job displacement, and even pose a threat to democratic processes. Although President Biden’s administration has advocated for federal AI regulations, Congress is split, which makes it challenging to enact comprehensive national-level legislation.

Tanzania embraces AI to tackle rising cybercrime

Tanzanian President Samia Suluhu Hassan has called for the integration of AI into the strategies of the Tanzania Police Force to address the escalating threat of cybercrime. Speaking at the 2024 Annual Senior Police Officers’ Meeting and the 60th Anniversary of the Tanzania Police Force, President Samia emphasised that in today’s digital age, leveraging advanced technology is crucial for effectively combating online threats. She highlighted the necessity for the police to adapt technologically to stay ahead of sophisticated cybercriminals, underlining the importance of embracing these advancements.

In her address, President Samia also drew attention to a troubling surge in cybercrime, with incidents increasing by 36.1% from 2022 to 2023. She noted that crimes such as fraud, false information dissemination, pornography distribution, and harassment have become more prevalent, with offenders frequently operating from outside Tanzania. The President’s remarks underscore the urgency of adopting advanced technological tools to address these growing challenges effectively and to enhance the police’s capability to counteract such threats.

Furthermore, President Samia emphasised the need to maintain peace and stability during the upcoming local government and general elections. She tasked the police with managing election-related challenges, including defamatory statements and misinformation, without resorting to internet shutdowns. Stressing the importance of preserving national peace amid political activities, President Samia underscored that while elections are temporary, a stable environment is essential for ongoing development and progress.

Mistral AI lowers prices and launches free developer features

Mistral AI has launched a new free tier for developers to fine-tune and test apps using its AI models and has significantly reduced prices for API access to those models, the startup announced on Tuesday. The Paris-based company, valued at $6 billion, is introducing these updates to remain competitive with industry giants such as OpenAI and Google, which also offer free tiers for developers with limited usage. Mistral's free tier, accessible through its platform 'la Plateforme,' enables developers to test its AI models at no cost; however, paid access is required for commercial production.

Mistral has reduced the prices of its AI models, including Mistral NeMo and Codestral, by over 50% and cut the cost of its largest model, Mistral Large, by 33%. This decision reflects the increasing commoditisation of AI models in the developer space, with providers vying to offer more advanced tools at lower prices.

Mistral has integrated image processing into its consumer AI chatbot, le Chat, through its new multimodal model, Pixtral 12B. This model allows users to scan, analyse, and search image files alongside text, marking another advancement in the startup’s expanding AI capabilities.