US Senate passes bill to combat AI deepfakes

The US Senate has unanimously passed the DEFIANCE Act, allowing victims of nonconsensual intimate images created by AI, known as deepfakes, to sue their creators for damages. The bill enables victims to pursue civil remedies against those who produced or distributed sexually explicit deepfakes with malicious intent. Victims identifiable in these deepfakes can receive up to $150,000 in damages and up to $250,000 if linked to sexual assault, stalking, or harassment.

The legislative move follows high-profile incidents, such as AI-generated explicit images of Taylor Swift appearing on social media and similar cases affecting high school girls across the country. Senate Majority Leader Chuck Schumer emphasised the widespread impact of malicious deepfakes, highlighting the urgent need for protective measures.

Schumer described the DEFIANCE Act as part of broader efforts to implement AI safeguards to prevent significant harm. He called on the House to pass the bill, which has a companion bill awaiting consideration. Schumer assured victims that the government is committed to addressing the issue and protecting individuals from the abuses of AI technology.

Google’s Vertex AI will use Mistral AI’s Codestral

Google Cloud announced Wednesday that their AI service (Vertex) will use Mistral AI’s Codestral AI model, as the Google Cloud team explained.

“Today, we’re announcing that Google Cloud is the first hyper scaler to introduce Codestral – Mistral AI’s first open-weight generative AI model explicitly designed for code generation tasks — as a fully managed service.”, the company emphasised.

Mistral AI is a Paris-based startup firm founded in 2023 by former Google Deep Mind and Meta AI scientists. The partnership shows the quick growth of Mistral AI, considered the European alternative to Microsoft-backed OpenAI by many analysts.

Bing previews their generative search in answer to Google’s AI Overviews

Microsoft previewed Bing’s generative search, which is the answer to Google’s AI-powered search experiences. It is currently only available for a small percentage of users. It aggregates information from around the web and generates a summary in response to search queries.

Bing generative search will show information about the search and provide top examples, links, and sources showing where those details came from. As with Google’s similar AI Overviews feature, there’s an option to dismiss AI-generated summaries for traditional search from the same results page.

These AI-generated overview features have already generated concern, especially among publishers, as they threaten to cannibalise traffic to the sites from which they source their information. A study found that AI Overviews could negatively affect about 25% of publisher traffic due to the de-emphasis on article links.

Microsoft insists that it’s ‘maintaining the number of clicks to websites’ and ‘look[ing] closely at how generative search impacts traffic to publishers.’ According to Kyle Wiggers, senior reporter at TechCrunch, the company had no stats to back this commitment, alluding only to ‘early data’ that it’s choosing to keep private for the time being.

China’s new video-generating AI faces limitations due to political censorship

A new AI video-generating model, Kling, developed by Beijing-based Kuaishou, is now widely available but with significant limitations. Initially launched in a waitlisted access for users with Chinese phone numbers, Kling can now be accessed by anyone providing their email. The model generates five-second videos based on user prompts, simulating physics like rustling leaves and flowing water with a resolution of 720p.

However, Kling censors politically sensitive topics. Prompts related to ‘Democracy in China,’ ‘Chinese President Xi Jinping,’ and ‘Tiananmen Square protests’ result in error messages. The censorship occurs at the prompt level, allowing for the generation of videos related to these topics as long as they are not explicitly mentioned.

That behaviour likely stems from intense political pressure from the Chinese government. The Cyberspace Administration of China (CAC) is actively testing AI models to ensure they align with core socialist values and has proposed a blacklist of sources for training AI models. Companies must prepare models that produce ‘safe’ answers to thousands of questions, which may slow China’s AI development and create two classes of models: those heavily filtered and those less so.

The dichotomy raises questions about the broader implications for the AI ecosystem, as restrictive policies may hinder technological advancement and innovation.

AI system improves breast cancer staging

Researchers at the Paul Scherrer Institute (PSI) and the Massachusetts Institute of Technology (MIT) have developed an AI system to improve the categorisation of breast cancer. The new technology, led by G.V. Shivashankar from PSI and Caroline Uhler from MIT, aims to provide a reliable and cost-effective method for predicting the progression of ductal carcinoma in situ (DCIS) to invasive ductal carcinoma (IDC).

DCIS, a precursor of breast cancer in the milk ducts, accounts for about 25% of breast cancer diagnoses. It can develop into a threatening invasive form in 30 to 50% of cases. The AI system, trained on tissue samples stained with DAPI dye, analyses chromatin images to identify patterns matching those identified by human pathologists. This approach leverages AI’s potential, as highlighted by research in Lancet Digital Health showing AI outperforming radiologists in breast cancer detection.

The researchers believe this AI-based tumour classification method has significant potential, though further studies are necessary to ensure its reliability and safety. The US Department of Defense (DoD) has been using AI to detect cancer since 2020, showcasing the growing role of AI in medical diagnostics. The new system developed by PSI and MIT could lead to more accurate predictions and better treatment decisions for patients.

Meta’s AI bots aim to support content creators

Meta CEO Mark Zuckerberg has proposed a vision where AI bots assist content creators with audience engagement, aiming to free up their time for more crucial tasks. In an interview with internet personality Rowan Cheung, Zuckerberg discussed how these AI bots could capture the personalities and business objectives of creators, allowing fans to interact with them as if they were the creators themselves.

Zuckerberg’s optimism aligns with many in the tech industry who believe AI can significantly enhance the impact of individuals and organizations. However, there are concerns about whether creators, whose audiences value authenticity, will embrace generative AI. Meta’s initial rollout of AI-powered bots earlier this year faced issues, including bots making false claims and providing misleading information, raising questions about the technology’s reliability.

Meta claims improvements with its latest AI model, Llama 3.1, but challenges such as hallucinations and planning errors persist. Zuckerberg acknowledges the need to address these concerns and build trust with users. Despite these hurdles, Meta continues to focus on integrating AI into its platforms while also pursuing its Metaverse ambitions and competing in the tech space.

Meta’s plans to introduce generative AI to its apps dating back to 2023, along with its increased focus on AI amid Metaverse ambitions highlight the company’s broader strategic vision. However, convincing creators to rely on AI bots for fan interaction remains a significant challenge.

Samsung to relaunch virtual assistant Bixby with advanced AI capabilities

Samsung is relaunching Bixby, its virtual assistant initially introduced in 2017. The new version will feature advanced AI capabilities, enhancing user interactions with generative AI powered by Samsung’s proprietary large language model. The announcement was confirmed by TM Roh, head of Samsung’s mobile division, who emphasised the improvements aimed at making Bixby a more natural conversational interface.

Samsung recently unveiled the Galaxy Z Fold 6 and Galaxy Z Flip 6, highlighting new AI tools designed to enhance the user experience. However, the upgraded Bixby was not mentioned during the latest Galaxy Unpacked event. Speculations suggest that the new Bixby may debut with the Galaxy S25 series early next year, though no specific timeline has been provided.

Reintroducing Bixby into a market dominated by Google Assistant, Amazon Alexa and Apple’s Siri is a significant challenge. Apple has also announced AI enhancements for Siri, intensifying the competition. Samsung aims to differentiate Bixby by seamlessly integrating it with its extensive product ecosystem, providing a unique user experience.

The timeline for Bixby’s AI upgrade remains unclear. Whether Samsung’s virtual assistant will reclaim its position among the top contenders in the market is uncertain. However, Samsung is determined to make a strong impact with its revitalised Bixby.

OpenAI announces major reorganisation to bolster AI safety measures

OpenAI’s AI safety leader, Aleksander Madry, is now working on a new significant research project, according to CEO Sam Altman. OpenAI executives Joaquin Quinonero Candela and Lilian Weng will take over the preparedness team, which evaluates the readiness of the company’s models for general AI. The move is part of a broader strategy to unify OpenAI’s safety efforts.

OpenAI’s preparedness team ensures the safety and readiness of its AI models. Following Madry’s shift to a new research role, he will have an expanded position within the research organization. OpenAI is also addressing safety concerns surrounding its advanced chatbots, which can engage in human-like conversations and generate multimedia content from text prompts.

Joaquin Quinonero Candela and Lilian Weng will lead the preparedness team as part of this strategic change. Researcher Tejal Patwardhan will manage much of the team’s work, ensuring the continued focus on AI safety. The reorganization follows the recent formation of a Safety and Security Committee, led by board members including Sam Altman.

The reshuffle comes amid rising safety concerns as OpenAI’s technologies become more powerful and widely used. The Safety and Security Committee was established earlier this year in preparation for training the next generation of AI models. These developments reflect OpenAI’s ongoing commitment to AI safety and responsible innovation.

Tesla introduces humanoid robots, Musk confirmed

Elon Musk has revealed that Tesla will start using humanoid robots next year. These robots will initially serve Tesla internally, with plans to begin sales by 2026. However, announcement aligns with Musk’s broader strategy to cut costs amid decreasing demand for Tesla vehicles.

Tesla’s recent financial update reported a significant drop in profits for the second quarter, from $2.7bn to less than $1.5bn. Despite various price cuts, automotive revenue decreased by 7% year-on-year, though a rise in the energy storage business led to a modest 2% increase in overall revenue. Consequently, Tesla’s shares fell by almost 8% in after-hours trading.

Musk has been increasingly focusing on advanced technologies such as AI and autonomous driving. He announced that the Optimus robot would be ready for internal use at Tesla by the end of this year, with mass production expected by 2026. Optimus is designed to perform tasks that are unsafe, repetitive, or boring for humans.

Mr Musk’s ambitious timelines have often been missed, including previous predictions about self-driving taxis. Tesla remains committed to developing robo-taxis, but their launch depends on regulatory approval. Other companies like Honda and Boston Dynamics are also developing humanoid robots, highlighting the competitive nature of this emerging field.

Lakera secures $20M for AI protection, Gandalf helps track threats

Leaders of Fortune 500 companies developing AI applications face a potential nightmare: hackers tricking AI into revealing sensitive data. Zurich-based startup Lakera has raised $20 million to address this issue. The funding round, led by Atomico with participation from Citi Ventures and Dropbox Ventures, brings Lakera’s total funding to $30 million. Lakera’s platform, used by companies like Dropbox and Citi, allows businesses to set guardrails for generative AI, protecting against prompt injection attacks.

Lakera CEO David Haber highlighted the importance of safety and security as companies integrate generative AI into critical functions. Existing security teams encounter new challenges in securing these applications. Lakera’s platform, built on internal AI models, ensures that generative AI applications do not take unintended actions. Customers can specify the context and policies for AI responses, preventing the disclosure of sensitive information.

A unique advantage for Lakera is Gandalf, an online AI security game used by millions, including Microsoft. The game generates a real-time database of AI threats, keeping Lakera’s software updated with thousands of new attacks daily. That helps in maintaining robust security measures for their clients.

Lakera competes in the generative AI security landscape with startups like HackerOne and BugCrowd. Matt Carbonara of Citi Ventures praised Lakera’s focus on prompt injection attacks and its team’s capability to build the necessary countermeasures for new attack surfaces.