Mistral AI lowers prices and launches free developer features

Mistral AI has launched a new free tier that lets developers fine-tune and test apps using its AI models, and has significantly reduced prices for API access to those models, the startup announced on Tuesday. The Paris-based company, valued at $6 billion, is introducing these updates to remain competitive with industry giants such as OpenAI and Google, which also offer free developer tiers with limited usage. Mistral’s free tier, accessible through its platform ‘la Plateforme,’ enables developers to test its AI models at no cost, although paid access is required for commercial production.
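For illustration, the sketch below shows how a developer might call one of these models through la Plateforme’s HTTP API. The endpoint path, model identifier, and environment variable are assumptions based on Mistral’s public API conventions, not details from the announcement.

```python
# Illustrative sketch only: endpoint, model name, and payload shape are
# assumptions based on Mistral's public API conventions.
import os
import requests

API_KEY = os.environ["MISTRAL_API_KEY"]  # key issued via la Plateforme (assumed env var)

response = requests.post(
    "https://api.mistral.ai/v1/chat/completions",  # assumed chat completions endpoint
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    json={
        "model": "open-mistral-nemo",  # assumed model identifier
        "messages": [
            {"role": "user", "content": "Summarise this week's AI news in one sentence."}
        ],
    },
    timeout=30,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```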

Mistral has reduced the prices of its AI models, including Mistral NeMo and Codestral, by over 50% and cut the cost of its largest model, Mistral Large, by 33%. This decision reflects the increasing commoditisation of AI models in the developer space, with providers vying to offer more advanced tools at lower prices.

Mistral has integrated image processing into its consumer AI chatbot, le Chat, through its new multimodal model, Pixtral 12B. This model allows users to scan, analyse, and search image files alongside text, marking another advancement in the startup’s expanding AI capabilities.

US to host global AI safety summit in November

The United States is set to host a global AI safety summit in November, focusing on international cooperation for AI safety. The summit will take place in San Francisco on 20-21 November, with Commerce Secretary Gina Raimondo and Secretary of State Antony Blinken overseeing the event. The gathering will include representatives from multiple countries, such as Australia, Canada, Japan, and the European Union, all part of the International Network of AI Safety Institutes.

The summit’s primary objective is to promote collaboration in ensuring the safe and secure development of AI technologies. Generative AI, which can generate text, images, and videos, has raised concerns over potential job loss, electoral manipulation, and broader risks to society. Addressing these issues, the summit will bring together technical experts to share knowledge and develop strategies for global AI safety.

Raimondo first introduced the idea of the International Network of AI Safety Institutes at the AI Seoul Summit in May, where countries agreed to prioritise safety and innovation in AI development. The upcoming event in the US will mark the first formal gathering of this group, ahead of the larger AI Action Summit scheduled for Paris in February 2025.

The Biden administration has already made strides in AI regulation, with President Biden signing an executive order last year. The order requires developers of AI systems posing national security or public health risks to submit safety test results before releasing their products to the public.

New AI tools and lenses coming to Snapchat

At its annual Snap Partner Summit, Snapchat announced new AI-powered features to improve the user experience. The app’s My AI chatbot now functions similarly to Google Lens, enabling users to photograph menus in foreign languages for translation, identify plants, or decipher parking signs using AI. These updates aim to make My AI more practical, moving beyond entertainment to become a helpful tool for users.

Snapchat is introducing AI-powered edits for Snapchat+ subscribers through the ‘My Selfie’ feature. This feature allows users to enhance saved Snaps with captions and creative lenses. For example, users can transform a selfie into a Renaissance painting. Additionally, users can choose to be featured in AI-generated images with friends, such as being portrayed as lawyers or athletes.

Snapchat is also introducing a new AI-powered lens that shows users how they might look as they age, a response to TikTok’s popular old-age filter. Other updates include enhanced HD video calls, SnapMail for leaving messages when friends miss a call, and local time zone displays in chats to help users stay connected across the world.

Slack to transform into AI-powered work operating system

Slack is undergoing a major transformation as it integrates AI features into its platform, aiming to evolve from a simple messaging service to a ‘work operating system.’ CEO Denise Dresser said Slack will now serve as a hub for AI applications from companies like Salesforce, Adobe, and Anthropic. New, pricier features include AI-generated summaries of conversations and the ability to interact with AI agents for tasks such as data analysis, web searches, and image generation.

This shift follows Salesforce’s 2021 acquisition of Slack and its broader move toward AI-driven solutions. Slack’s AI integration seeks to enhance productivity by offering tools to catch up on team discussions, analyse business data, and create branded content, all within the chat environment. However, questions remain about whether users will embrace and pay for these premium features and how this change aligns with Slack’s core identity as a workplace communication tool.

Concerns around data privacy have also surfaced as Slack leans further into AI. The company faced criticism earlier this year over its handling of customer data used for training purposes, but maintains that it does not use customer messages to train its AI models. As Slack continues integrating AI, it must address growing scepticism about how it manages and safeguards data.

New Google update will identify AI-edited images

Google is planning to roll out new features that will identify images in search results that have been generated or edited using AI. This update will highlight such photos in the ‘About this image’ section across Google Search, Google Lens, and the Circle to Search feature on Android. In the future, the disclosure feature may also be extended to other Google platforms, such as YouTube.

To achieve this, Google will utilise C2PA metadata developed by the Coalition for Content Provenance and Authenticity. This metadata tracks an image’s history, including its creation and editing process. However, the adoption of C2PA standards is limited, and metadata can be altered or removed, which may impact the reliability of this identification method.
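As a rough illustration, the sketch below checks whether an image file appears to carry an embedded C2PA manifest. It is a byte-level heuristic only, not a spec-conformant parser, and it does not verify the manifest’s signatures; the file name is hypothetical.

```python
# Rough heuristic sketch: looks for a C2PA marker string in an image file's bytes.
# A real implementation would parse and cryptographically verify the C2PA manifest
# with a dedicated library; this only hints at whether provenance metadata might
# be present, and says nothing about whether it is valid.
from pathlib import Path


def may_contain_c2pa_manifest(image_path: str) -> bool:
    data = Path(image_path).read_bytes()
    # C2PA manifests are stored in JUMBF boxes labelled "c2pa"; the absence of
    # this string does not prove an image is unedited, since metadata can be
    # stripped or altered, as noted above.
    return b"c2pa" in data


if __name__ == "__main__":
    print(may_contain_c2pa_manifest("example.jpg"))  # hypothetical file name
```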

Despite the challenges, Google’s action addresses the increasing concerns about deepfakes and AI-generated content. There have been reports of a significant rise in scams involving such content, and losses related to deepfakes are expected to increase dramatically in the coming years. As public concern about deepfakes and AI-driven misinformation grows, Google’s initiative aims to provide more transparency in digital media.

Microsoft and G42 establish AI hubs in Abu Dhabi

The UAE’s AI ambitions take a leap forward with two new centres in Abu Dhabi, led by Microsoft and G42. The facilities will focus on developing responsible AI technologies and promoting best practices across the academic and private sectors.

One of the centres will address underrepresented languages in AI by creating large language models, while the other brings experts together to explore responsible AI usage. Both centres build on Microsoft’s recent $1.5 billion investment in G42.

Competition in AI is growing in the region, with Qatar and Saudi Arabia also seeking to emerge as key hubs. Meanwhile, G42’s decision to divest from China helps ensure that the partnership addresses US and UAE government security concerns.

By opening these new centres, the UAE hopes to bolster its position as a global AI leader, demonstrating its shift away from reliance on oil toward innovative technology development.

AI-powered fact-checking tech in development by NEC

Japanese technology corporation NEC (Nippon Electric Company) is developing an AI technology designed to analyse and verify the trustworthiness of online information. The project, launched under Japan’s Ministry of Internal Affairs and Communications, aims to help combat false and misleading content on the internet. The system will be tested by fact-checking organisations, including the Japan Fact-check Center and major media outlets, with the goal of making it widely available by 2025.

The AI uses large language models (LLMs) to assess different types of content, such as text, images, video, and audio, and to detect whether they have been manipulated or are misleading. The system then evaluates the information’s reliability, looking for inconsistencies and checking that sources are accurate. The resulting reports allow for user-driven adjustments, such as removing unreliable information or adding new details, helping organisations streamline their verification processes.
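To make the described workflow concrete, here is a minimal, hypothetical sketch of what a single LLM-based reliability check might look like. The prompt wording and the `call_llm` parameter are illustrative assumptions, not details of NEC’s system.

```python
# Hypothetical illustration only; this is not NEC's actual system.
# `call_llm` stands in for whatever chat-completion client an organisation uses.
import json
from typing import Callable


def assess_claim(claim: str, source_excerpts: list[str], call_llm: Callable[[str], str]) -> dict:
    """Ask an LLM to judge a claim against source excerpts and return a structured report."""
    prompt = (
        "You are assisting a fact-checking workflow.\n"
        f"Claim: {claim}\n"
        "Source excerpts:\n"
        + "\n".join(f"- {s}" for s in source_excerpts)
        + "\nRespond with JSON containing: verdict (supported / contradicted / unverifiable), "
        "inconsistencies (list of strings), and notes."
    )
    # The structured report can then be edited by human fact-checkers,
    # e.g. removing unreliable items or adding details, as described above.
    return json.loads(call_llm(prompt))
```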

As the project progresses, NEC hopes to refine its AI system to assist fact-checkers more effectively, ensuring that false information can be identified and addressed in real time. The technology could become a vital tool for media and fact-checking organisations, addressing the growing problem of misinformation online.

The FCC proposes new rules for AI-generated calls and texts

The US Federal Communications Commission (FCC) has introduced new proposals to regulate AI-generated communications in telecommunications. That initiative, detailed in a Notice of Proposed Rulemaking (NPRM) and a Notice of Inquiry (NOI) released in August, seeks to define and manage the use of AI in outbound calls and text messages.

The NPRM proposes defining an ‘AI-generated call’ as one utilising AI technologies—such as machine learning algorithms or predictive models—to produce artificial or prerecorded voice or text content. The rules would require callers to disclose AI use and obtain specific consent from consumers, ensuring greater transparency and control over AI-generated communications.

In addition to defining and regulating AI-generated calls, the NPRM includes provisions to address the needs of individuals with speech or hearing disabilities. It proposes an exemption from certain TCPA requirements for AI-generated calls made by these individuals, provided such calls are not for telemarketing or advertising. That exemption aims to facilitate communication for those who depend on AI technologies for telephone interactions, balancing regulatory requirements with accessibility needs.

The NOI, on the other hand, seeks feedback on technologies designed to detect, alert, and block potentially fraudulent or AI-generated calls, exploring their development and privacy implications. It questions how these technologies handle call content data and whether current privacy laws are adequate.

The FCC also invites comments on the potential costs and benefits of the proposed rules and asserts that its authority to implement them is grounded in the Telephone Consumer Protection Act (TCPA). As the comment deadlines approach, the FCC anticipates a thorough discussion on these regulatory changes, which could significantly impact how AI technologies are managed in telecommunications.

BlackRock and Microsoft plan $30 billion AI infrastructure investment

BlackRock and Microsoft have announced plans to create an investment fund of more than $30 billion to develop infrastructure for AI. The fund, the Global AI Infrastructure Investment Partnership, will focus on building data centres and energy projects to support the growing computational demands of AI technologies. As AI models, particularly those involved in deep learning and large-scale data processing, require immense processing power, these investments are critical to meeting rising energy and infrastructure needs.

The surge in demand for AI has driven tech companies to link thousands of chips together in large clusters to process massive amounts of data, fueling the necessity for specialised data centres. BlackRock and Microsoft’s partnership aims to strengthen AI supply chains and improve energy sourcing to support these advancements. Abu Dhabi-backed investment company MGX will also join as a general partner in the venture, while AI chip leader Nvidia will provide its technical expertise to guide the initiative.

The partnership could mobilise up to $100 billion in investment when debt financing is included. Most of this investment will be made in the US, with the rest targeted at partner countries. The ambitious collaboration reflects the rapidly expanding need for AI infrastructure and the commitment of major global players to fuel its growth.

GSMA launches responsible AI roadmap

GSMA has launched its inaugural Responsible AI (RAI) Maturity Roadmap, marking a significant step toward ethical AI practices across the telecom sector. That initiative represents the first sector-wide effort to unify approaches to responsible AI use, providing telecom operators with a structured framework to assess their current AI maturity and set clear goals for future improvement.

The roadmap integrates global standards and regulations from organisations such as the OECD and UNESCO, ensuring its guidelines are comprehensive and internationally recognised. This alignment supports the creation of a robust framework that promotes ethical AI practices throughout the industry.

GSMA and industry leaders emphasise the substantial economic potential of AI, with projections suggesting up to $680 billion in opportunities for the telecom sector over the next 15-20 years. The roadmap focuses on five core dimensions—vision and strategic goals, AI governance, technical controls, third-party collaboration, and change management—providing a comprehensive approach to responsible AI. That includes best practices such as fairness, privacy, safety, transparency, accountability, and environmental impact.

Why does this matter?

Statements from GSMA Director General Mats Granryd and Telefónica Chairman José María Álvarez-Pallete López highlight the need for ethical guidelines to manage AI’s rapid development and set a precedent for other industries to follow in adopting responsible AI practices.