The UK’s Competition and Markets Authority (CMA) has decided against investigating the partnership between Google’s parent company, Alphabet, and AI startup Anthropic. Following a detailed review, the CMA found the agreement did not qualify as a merger under UK competition law.
Concerns over competition prompted the CMA to scrutinise the deal, focusing on whether it gave Alphabet control over Anthropic’s business. The authority concluded that Alphabet’s involvement, including financial support and computing resources, did not result in material influence or loss of independence for Anthropic.
The agreement includes Google providing Anthropic with cloud services, distributing its AI models, and offering convertible debt financing. While the partnership is significant, Anthropic’s UK turnover fell below the £70m threshold required for the deal to qualify for review as a merger.
This ruling follows similar CMA decisions involving tech companies and AI startups, including clearing Microsoft’s investment in Mistral and Amazon’s $4bn stake in Anthropic. The watchdog remains vigilant about potential anti-competitive practices in the rapidly growing AI sector.
California Governor Gavin Newsom has signed Assembly Bill 3030 (AB 3030) into law, which will regulate the use of generative AI (GenAI) in healthcare. Effective 1 January 2025, the law mandates that any AI-generated communications related to patient care must include a clear disclaimer informing patients of their AI origin. The disclaimer must also instruct patients on how to contact a human healthcare provider for further clarification.
The bill is part of a larger effort to ensure patient transparency and mitigate risks linked to AI in healthcare, especially as AI tools become increasingly integrated into clinical environments. However, AI-generated communications that have been reviewed by licensed healthcare professionals are exempt from these disclosure requirements. The law focuses on clinical communications and does not apply to non-clinical matters like appointment scheduling or billing.
AB 3030 also introduces accountability for healthcare providers who fail to comply, with physicians facing oversight from the Medical Board of California. The law aims to balance AI’s potential benefits, such as reducing administrative burdens, with the risks of inaccuracies or biases in AI-generated content. California’s move is part of broader efforts to regulate AI in healthcare, aligning with initiatives like the federal Blueprint for an AI Bill of Rights.
As the law takes effect, healthcare providers in California will need to adapt to these new rules, ensuring that AI-generated content is flagged appropriately while maintaining the quality of patient care.
A new studio, Promise, has been launched to revolutionise filmmaking with the use of generative AI. Backed by venture capital firm Andreessen Horowitz and former News Corp President Peter Chernin, the startup is setting its sights on blending AI with Hollywood storytelling. The announcement coincided with the conclusion of its fundraising round.
Founded by Fullscreen founder George Strompolos, ex-YouTube executive Jamie Byrne, and AI artist Dave Clark, the studio aims to harness the GenAI boom to streamline and enhance content creation. Promise is collaborating with Hollywood stakeholders to develop a multi-year slate of films and series, combining creative expertise with cutting-edge technology.
The company is also developing an AI-driven software tool named Muse, designed to assist artists throughout the production process. Muse aims to integrate generative AI at every stage, offering a streamlined approach to creating movies and shows. Promise hopes to position itself as a leader in the evolving landscape of AI-powered media.
Generative AI has gained traction in Hollywood, with tools like OpenAI’s Sora and Adobe’s video-generation model prompting industry interest. These innovations have spurred discussions about potential collaborations to reduce costs and speed up production. Promise’s launch adds to this momentum, marking a step forward in AI-driven entertainment.
Asian News International (ANI), one of India’s largest news agencies, has filed a lawsuit against OpenAI, accusing it of using copyrighted news content to train its AI models without authorisation. ANI alleges that OpenAI’s ChatGPT generated false information attributed to the agency, including fabricated interviews, which it claims could harm its reputation and spread misinformation.
The case, filed in the Delhi High Court, is India’s first legal action against OpenAI on copyright issues. While the court summoned OpenAI to respond, it declined to grant an immediate injunction, citing the complexity of the matter. A detailed hearing is scheduled for January, and an independent expert may be appointed to examine the case’s copyright implications.
OpenAI has argued that copyright laws don’t protect factual data and noted that websites can opt out of data collection. ANI’s counsel countered that public access does not justify content exploitation, emphasising the risks posed by AI inaccuracies. The case comes amid growing global scrutiny of AI companies over their use of copyrighted material, with similar lawsuits ongoing in the US, Canada, and Germany.
With AI adoption surging, data centers are bracing for a 160% jump in electricity consumption by 2030, driven by the energy demands of GPUs. Sagence AI, a startup led by Vishal Sarin, is addressing this challenge by developing analog chips that promise greater energy efficiency without sacrificing performance.
Unlike traditional digital chips, Sagence’s analog designs minimise memory bottlenecks and offer higher data density, making them a viable option for specialised AI applications in servers and mobile devices. While analog chips pose challenges in precision and programming, Sagence aims to complement, not replace, digital solutions, delivering cost-effective and eco-friendly alternatives.
Backed by $58m in funding from investors including TDK Ventures and New Science Ventures, Sagence plans to launch its chips in 2025. As it scales operations, the startup faces stiff competition from industry giants and will need to prove its technology can outperform established systems while maintaining lower energy consumption.
David Attenborough has criticised American AI firms for cloning his voice to narrate partisan reports. Outlets such as The Intellectualist have used his distinctive voice for topics including US politics and the war in Ukraine.
The broadcaster described these acts as ‘identity theft’ and expressed profound dismay over losing control of his voice after decades of truthful storytelling. Scarlett Johansson has faced a similar issue, with an AI chatbot voice called ‘Sky’ closely mimicking her own.
Experts warn that such technology poses risks to reputations and legacies. Dr Jennifer Williams of the University of Southampton highlighted the troubling implications for Attenborough’s legacy and authenticity in the public eye.
Regulations to prevent voice cloning remain absent, raising concerns about its misuse. The Intellectualist has yet to comment on Attenborough’s allegations.
At the SC24 conference, Dell unveiled a range of AI-powered infrastructure products designed to overcome common obstacles in AI adoption, such as data quality, cost, and sustainability concerns. The company’s focus is on providing solutions that allow businesses to unlock the full potential of their data to remain competitive in the rapidly evolving AI landscape.
Among the highlights were three new server products: the PowerEdge XE7740, XE9685L, and the updated Integrated Rack 5000 series. These servers cater to both AI inference and high-density training needs, with features like support for multiple NVIDIA GPUs and enhanced network performance, ensuring scalability for enterprise AI workloads.
Dell also announced a significant update to its Data Lakehouse, now integrating Apache Spark to support unified access control. These innovations aim to simplify the management of AI and high-performance computing workloads, offering improved insights and more efficient processes.
As part of its broader strategy, Dell revealed partnerships with NVIDIA to optimise its AI infrastructure with advanced GPUs and software. Additionally, new services like Dell Data Management and sustainable data centre solutions are set to help businesses build more efficient AI systems while addressing environmental concerns.
Meta has started rolling out AI capabilities for its Ray-Ban Meta smart glasses in France, Italy, and Spain. Users in these countries can now access Meta AI, the company’s voice-activated assistant, which supports French, Italian, and Spanish alongside English.
The rollout follows months of efforts to align the glasses with Europe’s regulatory requirements. Meta expressed excitement about bringing its innovative features to the region and plans further expansion. However, certain features available in other regions, such as multimodal capabilities using the glasses’ cameras, remain unavailable in Europe for now.
Meta has faced challenges complying with Europe’s AI regulations, including the EU’s AI Act and GDPR privacy laws. These rules govern AI training practices, particularly regarding data sourced from Instagram and Facebook users. Earlier this year, EU regulators temporarily restricted Meta from training AI models on European user data.
After making adjustments to its opt-out processes, Meta resumed training on UK data and introduced AI features in several countries. The company has yet to disclose broader compliance measures for the rest of the EU, though it remains committed to addressing regulatory feedback.
FPT Vietnam and Ericsson have partnered to accelerate the adoption of 5G technology in Vietnam and drive advancements in AI and digital transformation. The collaboration will focus on developing applications that highlight the potential of 5G in key sectors such as healthcare, manufacturing, and retail, especially through augmented and virtual reality.
By leveraging Ericsson’s 5G expertise, FPT aims to enhance its AI capabilities and create more sophisticated, data-driven solutions. The partnership is designed to speed up 5G deployment and unlock new opportunities for consumer and enterprise markets, ultimately boosting Vietnam’s digital infrastructure. It also marks a significant milestone as Vietnam becomes a key market for 5G technology, laying the foundation for broader international collaboration.
The partnership was officially announced during FPT Techday 2024, a major technology forum that brings together industry leaders, businesses, and technology enthusiasts. The event underscored the strategic importance of the collaboration and its potential to foster innovation and business growth in Vietnam. Through this initiative, FPT and Ericsson are advancing 5G adoption and enabling local businesses to maximise the benefits of next-generation connectivity.
President Joe Biden and China’s President Xi Jinping held a two-hour meeting on the sidelines of the APEC summit on Saturday. Both leaders reached a significant agreement to prevent AI from controlling nuclear weapons systems and made progress on securing the release of two US citizens wrongfully detained in China. Biden also pressured Xi to reduce North Korea’s support for Russia in the ongoing Ukraine conflict.
The breakthrough in nuclear safety, particularly the commitment to maintain human control over nuclear decisions, was reported as an achievement for Biden’s foreign policy. Xi, in contrast, called for greater dialogue and cooperation with the US and cautioned against efforts to contain China. His remarks also acknowledged rising geopolitical challenges, hinting at the difficulties that may arise under a Trump presidency. The meeting showcased a shift in tone from their previous encounter in 2023, reflecting a more constructive dialogue despite underlying tensions.
Reuters reported that it remains uncertain whether the statement will result in additional talks or concrete actions on the issue. The US has long held the position that AI should assist and enhance military capabilities, but not replace human decision-making in high-stakes areas such as nuclear weapons control. Last year, the Biden-Harris administration announced the Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy, which more than 20 countries have endorsed. The declaration specifically underlines that “military use of AI capabilities needs to be accountable, including through such use during military operations within a responsible human chain of command and control”.