California passes new law regulating AI in healthcare

California Governor Gavin Newsom has signed Assembly Bill 3030 (AB 3030) into law, regulating the use of generative AI (GenAI) in healthcare. Effective 1 January 2025, the law mandates that any AI-generated communication related to patient care must include a clear disclaimer informing patients of its AI origin and directing them to contact a human healthcare provider for further clarification.

The bill is part of a larger effort to ensure patient transparency and mitigate risks linked to AI in healthcare, especially as AI tools become increasingly integrated into clinical environments. However, AI-generated communications that have been reviewed by licensed healthcare professionals are exempt from these disclosure requirements. The law focuses on clinical communications and does not apply to non-clinical matters like appointment scheduling or billing.
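The disclosure rule and its review exemption can be pictured as a simple gate on outgoing messages. The sketch below is purely illustrative: the function name and disclaimer wording are assumptions, not language from the bill itself.

```python
# Hypothetical sketch of AB 3030-style disclosure handling.
# The disclaimer text and function names are illustrative only,
# not taken from the bill's actual wording.

AI_DISCLAIMER = (
    "This message was generated by artificial intelligence. "
    "Please contact your healthcare provider with any questions."
)

def prepare_patient_message(body: str, reviewed_by_clinician: bool) -> str:
    """Prepend a disclosure to AI-generated clinical messages.

    Communications reviewed by a licensed healthcare professional
    are exempt from the disclosure requirement, as described above.
    """
    if reviewed_by_clinician:
        return body
    return f"{AI_DISCLAIMER}\n\n{body}"
```

In practice, a provider's messaging system would apply a check like this only to clinical communications, since the law excludes non-clinical matters such as scheduling and billing.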

AB 3030 also introduces accountability for healthcare providers who fail to comply, with physicians facing oversight from the Medical Board of California. The law aims to balance AI’s potential benefits, such as reducing administrative burdens, with the risks of inaccuracies or biases in AI-generated content. California’s move is part of broader efforts to regulate AI in healthcare, aligning with initiatives like the federal AI Bill of Rights.

As the law takes effect, healthcare providers in California will need to adapt to these new rules, ensuring that AI-generated content is flagged appropriately while maintaining the quality of patient care.

Hollywood embraces AI with Promise studio launch

A new studio, Promise, has been launched to revolutionise filmmaking with the use of generative AI. Backed by venture capital firm Andreessen Horowitz and former News Corp President Peter Chernin, the startup is setting its sights on blending AI with Hollywood storytelling. The announcement coincided with the conclusion of its fundraising round.

Founded by Fullscreen’s CEO George Strompolos, ex-YouTube executive Jamie Byrne, and AI artist Dave Clark, the studio aims to harness the GenAI boom to streamline and enhance content creation. Promise is collaborating with Hollywood stakeholders to develop a multi-year slate of films and series, combining creative expertise with cutting-edge technology.

The company is also developing an AI-driven software tool named Muse, designed to assist artists throughout the production process. Muse aims to integrate generative AI at every stage, offering a streamlined approach to creating movies and shows. Promise hopes to position itself as a leader in the evolving landscape of AI-powered media.

Generative AI has gained traction in Hollywood, with tools like OpenAI’s Sora and Adobe’s video-generation model prompting industry interest. These innovations have spurred discussions about potential collaborations to reduce costs and speed up production. Promise’s launch adds to this momentum, marking a step forward in AI-driven entertainment.

Global South needs better AI access, says Xi

At the G20 Summit in Rio de Janeiro, Chinese President Xi Jinping warned against allowing AI to become the exclusive domain of wealthy nations. Speaking at the global forum, Xi called for stronger international governance and cooperation to ensure equitable access to AI technologies.

Xi highlighted China’s commitment to supporting developing countries, unveiling a joint initiative with G20 partners to improve access to scientific and technological innovations in the Global South. The Chinese leader also cautioned against protectionist policies, such as tariffs on Chinese goods, which he argued undermine global trade and the transition to green economies.

The remarks come as Xi tours Latin America, echoing similar criticisms of economic barriers he raised at the APEC forum in Peru. His appeal for openness and collaboration underscores China’s broader efforts to position itself as a champion of equitable global development.

New startup tackles AI energy demands with analog tech

With AI adoption surging, data centres are bracing for a 160% jump in electricity consumption by 2030, driven by the energy demands of GPUs. Sagence AI, a startup led by Vishal Sarin, is addressing this challenge by developing analog chips that promise greater energy efficiency without sacrificing performance.

Unlike traditional digital chips, Sagence’s analog designs minimise memory bottlenecks and offer higher data density, making them a viable option for specialised AI applications in servers and mobile devices. While analog chips pose challenges in precision and programming, Sagence aims to complement, not replace, digital solutions, delivering cost-effective and eco-friendly alternatives.

Backed by $58 million in funding from investors like TDK Ventures and New Science Ventures, Sagence plans to launch its chips in 2025. As it scales operations, the startup faces stiff competition from industry giants and will need to prove its technology can outperform established systems while maintaining lower energy consumption.

Perplexity launches shopping hub to compete with Google

Perplexity, an AI-driven search startup, has unveiled a new shopping hub to attract users and compete with Google’s dominance in search. Backed by Amazon founder Jeff Bezos and Nvidia, the platform offers visually rich product cards in response to shopping-related queries, integrating with platforms like Shopify to provide real-time product details.

The rollout includes features like ‘Snap to Shop’, which uses photos to suggest products, and a Merchant Program that allows retailers to share their offerings with Perplexity. Initially available in the US, the service will expand to other markets at a later date.

This move comes as Perplexity raises new investments at a reported $9 billion valuation and seeks to compete with OpenAI, which recently introduced enhanced search features for ChatGPT. The startup aims to leverage AI-powered tools to boost its presence in e-commerce and attract both users and merchants.

Malaysia explores AI for faster accident detection

Malaysia is considering adopting an AI-driven system to improve road safety. The Automatic Road Incident Detection System (ARIDS), developed by a Universiti Putra Malaysia (UPM) team, uses neural networks to identify accidents and traffic anomalies in real time. Currently in pilot testing across 1,000km of expressways and roads, ARIDS has shown potential to reduce emergency response times significantly.

ARIDS, launched in February, has already been implemented in Brunei and parts of Xi’an, China. The Malaysian Highway Authority (LLM) is assessing its viability for nationwide implementation. A recent crash in Johor, detected by ARIDS 23 minutes before an official report was made, highlighted the system’s ability to enhance response efficiency. Authorities currently rely on CCTV monitoring and user reports for accident detection, which often causes delays.

The system’s mobile integration allows remote access, providing alerts through WhatsApp without human intervention. It also monitors traffic congestion and vehicle breakdowns, offering insights into road safety improvements like sturdier guardrails. Analysts believe this AI-powered solution could complement existing monitoring systems, such as the Traffic Monitoring System (TMS) and CCTVs, and boost predictive capabilities.
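The detection-to-alert flow described above can be sketched as a simple pipeline: high-confidence incidents from the detection model are forwarded automatically to a messaging channel. All names, thresholds, and the notification hook below are assumptions for illustration, not details of the actual ARIDS system.

```python
# Illustrative sketch of an automated incident-alert flow like the one
# described for ARIDS. Class names, the confidence threshold, and the
# `send` hook are hypothetical, not details of the real system.

from dataclasses import dataclass

@dataclass
class Detection:
    camera_id: str
    label: str          # e.g. "accident", "breakdown", "congestion"
    confidence: float   # model confidence in [0, 1]

def dispatch_alerts(detections, threshold=0.9, send=print):
    """Forward high-confidence incidents without human intervention.

    `send` stands in for a messaging integration (e.g. a WhatsApp
    gateway); it defaults to printing for demonstration purposes.
    """
    alerts = []
    for d in detections:
        if d.confidence >= threshold:
            msg = f"[{d.camera_id}] {d.label} detected ({d.confidence:.0%})"
            send(msg)
            alerts.append(msg)
    return alerts
```

Keeping the notification mechanism behind a pluggable `send` hook is what would let such a system report to WhatsApp, a traffic-control dashboard, or an existing monitoring system like TMS without changing the detection logic.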

Broader adoption faces legal and operational hurdles. Concessionaires cannot currently enforce safety inspections on heavy vehicles without regulatory approval. However, integrating ARIDS with technologies like Weigh-In-Motion systems could streamline enforcement and reduce risks from overloaded or unsafe vehicles.

AI voice theft sparks David Attenborough’s outrage

David Attenborough has criticised American AI firms for cloning his voice to narrate partisan reports. Outlets such as The Intellectualist have used his distinctive voice for topics including US politics and the war in Ukraine.

The broadcaster described these acts as ‘identity theft’ and expressed profound dismay at losing control of his voice after decades of truthful storytelling. Scarlett Johansson faced a similar issue when OpenAI’s ‘Sky’ voice for ChatGPT drew widespread comparisons to her own.

Experts warn that such technology poses risks to reputations and legacies. Dr Jennifer Williams of the University of Southampton highlighted the troubling implications for Attenborough’s legacy and authenticity in the public eye.

Regulations to prevent voice cloning remain absent, raising concerns about its misuse. The Intellectualist has yet to comment on Attenborough’s allegations.

Samsung unveils AI smart glasses with Google and Qualcomm

Samsung has teamed up with Google and Qualcomm to develop advanced AI-powered smart glasses, set for release in Q3 2025. Initial production will feature 500,000 units, targeting a competitive edge over existing options like the Ray-Ban Meta smart glasses. Equipped with AI and augmented reality (AR) technologies, the glasses promise enhanced interactivity and user experiences.

The device features Qualcomm’s AR1 chip for performance and an auxiliary NXP processor for additional computing power. A 12MP Sony IMX681 camera supports high-quality video and image capture. Weighing just 50 grams, the glasses offer gesture and human recognition, QR-based payments, and extended use from a 155mAh battery.

Google’s Gemini large language model will integrate into the software, delivering smarter user interactions and contextual understanding. Samsung disclosed the development during its earnings report, with analysts expecting a possible showcase at the January Galaxy Unpacked event, alongside the Galaxy S25.

Market excitement grows as Samsung enters the competitive smart glasses arena, setting a high standard for innovation and functionality. Observers anticipate a significant shift in wearable technology driven by AI and AR advancements.

Nvidia’s Blackwell AI chips face overheating challenges

Nvidia is grappling with challenges related to its highly anticipated Blackwell AI chips. Customers have raised concerns over overheating issues in its custom server racks, which are critical for training large-scale AI models. The racks, designed to house 72 AI chips each, have undergone multiple design revisions late in the production process. Despite these setbacks, Nvidia remains optimistic about keeping to its shipping schedule.

Dell has already begun shipping Nvidia’s GB200 NVL72 server racks to customers such as CoreWeave. Nvidia described the engineering iterations as a normal part of integrating advanced systems into diverse data centre environments. The company highlighted its collaboration with leading cloud service providers to ensure successful implementation.

Past delays in Blackwell production were attributed to a design flaw, which Nvidia’s CEO Jensen Huang openly acknowledged. The flaw, linked to low production yields, required extensive collaboration with Taiwan Semiconductor Manufacturing Company to resolve. While these issues temporarily slowed progress, Nvidia remains on track for its long-term goals.

Nvidia is set to release its fiscal third-quarter earnings on Wednesday, with analysts projecting revenue of $33 billion and net income of $17.4 billion. Although shares dipped slightly on Monday, the stock has soared by 187% this year, underscoring investor confidence in the company’s AI-driven future.

Denmark faces backlash over AI welfare surveillance

Concerns are mounting over Denmark’s use of AI in welfare fraud detection, with Amnesty International condemning the system for violating privacy and risking discrimination. Algorithms developed by Udbetaling Danmark (UDK) and ATP flag individuals suspected of benefit fraud, potentially breaching EU laws. Amnesty argues these tools classify citizens unfairly, resembling prohibited social scoring practices.

The AI models process extensive personal data, including residency, citizenship, and sensitive information that may act as proxies for ethnicity or migration status. Critics highlight the disproportionate targeting of marginalised groups, such as migrants and low-income individuals. Amnesty accuses the algorithms of fostering systemic discrimination while exacerbating existing inequalities within Denmark’s social structure.

Experts warn that the system undermines trust, with many recipients reporting stress and depression linked to invasive investigations. Specific algorithms like ‘Really Single’ scrutinise family dynamics and living arrangements, often without clear criteria, leading to arbitrary decisions. Amnesty’s findings suggest these practices compromise human dignity and fail to uphold transparency.

Amnesty is urging Danish authorities to halt the system’s use and for the EU to clarify AI regulations. The organisation emphasises the need for oversight and bans on discriminatory data use. Danish authorities dispute Amnesty’s findings but have yet to offer transparency on their algorithmic processes.