AI chatbots in healthcare: Balancing potential and privacy concerns amidst regulatory gaps

Security experts are urging caution when using AI chatbots like ChatGPT and Grok for interpreting medical scans or sharing private health information. Recent trends show users uploading X-rays, MRIs, and other sensitive data to these platforms, but such actions can pose significant privacy risks. Uploaded medical images may become part of training datasets for AI models, leaving personal information exposed to misuse.

Unlike healthcare apps covered by laws like HIPAA, many AI chatbots lack strict data protection safeguards. Companies offering these services may use the data to improve their algorithms, but it’s often unclear who has access or how the data will be used. This lack of transparency has raised alarms among privacy advocates.

X owner Elon Musk recently encouraged users to upload medical imagery to Grok, his platform’s AI chatbot, citing its potential to evolve into a reliable diagnostic tool. However, Musk acknowledged that Grok is still in its early stages, and critics warn that sharing such data online could have lasting consequences.

Google funds AI-driven scientific breakthroughs

Google has announced a $20 million fund, with an additional $2 million in cloud credits, to support researchers using AI to tackle complex scientific challenges. The initiative, unveiled by Google DeepMind CEO Demis Hassabis at the AI for Science Forum in London, is part of Google’s broader strategy to foster innovation and collaboration with academic and non-profit organisations globally.

The funding will prioritise interdisciplinary projects addressing challenges in fields such as rare disease research, experimental biology, sustainability, and materials science. Google plans to distribute the funding to approximately 15 organisations by 2026, ensuring each grant is substantial enough to drive impactful breakthroughs. The programme reflects Google’s aim to position itself as a key partner in advancing science through AI, building on successes like AlphaFold, which recently earned DeepMind leaders a Nobel Prize in Chemistry.

The move aligns with a growing trend among Big Tech firms investing heavily in AI-driven research. Amazon’s AWS recently committed $110 million to similar grants, underscoring the race to attract leading scientists and researchers into their ecosystems. Hassabis expressed hope that the initiative would inspire greater collaboration between the private and public sectors and further demonstrate AI’s transformative potential in science.

California AI firm to unveil advanced networking chip in 2025

California-based AI startup Enfabrica has raised $115 million in a funding round to tackle one of the field’s most pressing challenges: enabling vast networks of AI chips to work together seamlessly at scale. The company, founded by former engineers from Broadcom and Alphabet, plans to release its new networking chip early next year. The chip aims to improve efficiency by addressing bottlenecks in how AI computing chips communicate over networks, a problem that slows data processing and wastes resources.

The startup claims its technology can scale AI networks to connect up to 500,000 chips, significantly surpassing the current limit of around 100,000. This could accelerate the training of larger AI models, cutting the time and cost lost to unreliable network links and failed training runs. “The attributes of the network, like bandwidth and resiliency, are critical for scaling AI efficiently,” said Enfabrica CEO Rochan Sankar.

Investors in the funding round included Spark Capital, Maverick Silicon, and corporate backers like Arm Holdings and Samsung Ventures. Nvidia, an industry leader in AI chips, also participated, signalling strong support for Enfabrica’s mission to optimise AI infrastructure.

Meta launches new AI division for businesses

Meta has hired Clara Shih, previously CEO of AI at Salesforce, to lead its newly formed Business AI group. Shih announced her move in a LinkedIn post, stating that her team aims to develop cutting-edge AI tools to help businesses on Meta platforms like Instagram, Facebook, and WhatsApp. The initiative seeks to empower businesses by making AI accessible and effective in driving growth.

The Business AI group will focus on leveraging Meta’s Llama language models to offer solutions for advertising and content creation. While specific tools have not been revealed, AI-generated ad creation is a likely feature. Meta’s strategy hinges on enhancing its platforms with AI tools, boosting ad engagement, and increasing revenue without directly charging for AI products.

Shih’s appointment comes amid intensified competition in enterprise AI. Salesforce, where Shih previously worked, has struggled to fully capitalise on the AI boom. Shih now has an opportunity to steer Meta’s efforts in reshaping how businesses interact with AI, marking a significant shift in the company’s focus toward business-oriented innovation.

UK’s CMA clears Google-Anthropic partnership

The UK’s Competition and Markets Authority (CMA) has decided against investigating the partnership between Google’s parent company, Alphabet, and AI startup Anthropic. Following a detailed review, the CMA found the agreement did not qualify as a merger under UK competition law.

Concerns over competition prompted the CMA to scrutinise the deal, focusing on whether it gave Alphabet control over Anthropic’s business. The authority concluded that Alphabet’s involvement, including financial support and computing resources, did not result in material influence or loss of independence for Anthropic.

The agreement includes Google providing Anthropic with cloud services, distributing its AI models, and offering convertible debt financing. While the partnership is significant, Anthropic’s UK turnover fell below the £70m threshold required for it to qualify as a merger.

This ruling follows similar CMA decisions involving tech companies and AI startups, including clearing Microsoft’s investment in Mistral and Amazon’s $4bn stake in Anthropic. The watchdog remains vigilant about potential anti-competitive practices in the rapidly growing AI sector.

California passes new law regulating AI in healthcare

California Governor Gavin Newsom has signed Assembly Bill 3030 (AB 3030) into law, which will regulate the use of generative AI (GenAI) in healthcare. Effective 1 January 2025, the law mandates that any AI-generated communications related to patient care must include a clear disclaimer informing patients of their AI origin. Such communications must also instruct patients on how to contact a human healthcare provider for further clarification.

The bill is part of a larger effort to ensure patient transparency and mitigate risks linked to AI in healthcare, especially as AI tools become increasingly integrated into clinical environments. However, AI-generated communications that have been reviewed by licensed healthcare professionals are exempt from these disclosure requirements. The law focuses on clinical communications and does not apply to non-clinical matters like appointment scheduling or billing.

AB 3030 also introduces accountability for healthcare providers who fail to comply, with physicians facing oversight from the Medical Board of California. The law aims to balance AI’s potential benefits, such as reducing administrative burdens, with the risks of inaccuracies or biases in AI-generated content. California’s move is part of broader efforts to regulate AI in healthcare, aligning with initiatives like the federal AI Bill of Rights.

As the law takes effect, healthcare providers in California will need to adapt to these new rules, ensuring that AI-generated content is flagged appropriately while maintaining the quality of patient care.

Hollywood embraces AI with Promise studio launch

A new studio, Promise, has been launched to revolutionise filmmaking with the use of generative AI. Backed by venture capital firm Andreessen Horowitz and former News Corp President Peter Chernin, the startup is setting its sights on blending AI with Hollywood storytelling. The announcement coincided with the conclusion of its fundraising round.

Founded by Fullscreen’s CEO George Strompolos, ex-YouTube executive Jamie Byrne, and AI artist Dave Clark, the studio aims to harness the GenAI boom to streamline and enhance content creation. Promise is collaborating with Hollywood stakeholders to develop a multi-year slate of films and series, combining creative expertise with cutting-edge technology.

The company is also developing an AI-driven software tool named Muse, designed to assist artists throughout the production process. Muse aims to integrate generative AI at every stage, offering a streamlined approach to creating movies and shows. Promise hopes to position itself as a leader in the evolving landscape of AI-powered media.

Generative AI has gained traction in Hollywood, with tools like OpenAI’s Sora and Adobe’s video-generation model prompting industry interest. These innovations have spurred discussions about potential collaborations to reduce costs and speed up production. Promise’s launch adds to this momentum, marking a step forward in AI-driven entertainment.

OpenAI faces lawsuit from Indian news agency

Asian News International (ANI), one of India’s largest news agencies, has filed a lawsuit against OpenAI, accusing it of using copyrighted news content to train its AI models without authorisation. ANI alleges that OpenAI’s ChatGPT generated false information attributed to the agency, including fabricated interviews, which it claims could harm its reputation and spread misinformation.

The case, filed in the Delhi High Court, is India’s first legal action against OpenAI on copyright issues. While the court summoned OpenAI to respond, it declined to grant an immediate injunction, citing the complexity of the matter. A detailed hearing is scheduled for January, and an independent expert may be appointed to examine the case’s copyright implications.

OpenAI has argued that copyright laws don’t protect factual data and noted that websites can opt out of data collection. ANI’s counsel countered that public access does not justify content exploitation, emphasising the risks posed by AI inaccuracies. The case comes amid growing global scrutiny of AI companies over their use of copyrighted material, with similar lawsuits ongoing in the US, Canada, and Germany.

New startup tackles AI energy demands with analog tech

With AI adoption surging, data centres are bracing for a 160% jump in electricity consumption by 2030, driven by the energy demands of GPUs. Sagence AI, a startup led by Vishal Sarin, is addressing this challenge by developing analog chips that promise greater energy efficiency without sacrificing performance.

Unlike traditional digital chips, Sagence’s analog designs minimise memory bottlenecks and offer higher data density, making them a viable option for specialised AI applications in servers and mobile devices. While analog chips pose challenges in precision and programming, Sagence aims to complement, not replace, digital solutions, delivering cost-effective and eco-friendly alternatives.

Backed by $58 million in funding from investors like TDK Ventures and New Science Ventures, Sagence plans to launch its chips in 2025. As it scales operations, the startup faces stiff competition from industry giants and will need to prove its technology can outperform established systems while maintaining lower energy consumption.

AI voice theft sparks David Attenborough’s outrage

David Attenborough has criticised American AI firms for cloning his voice to narrate partisan reports. Outlets such as The Intellectualist have used his distinctive voice for topics including US politics and the war in Ukraine.

The broadcaster described these acts as ‘identity theft’ and expressed profound dismay over losing control of his voice after decades of truthful storytelling. Scarlett Johansson has faced a similar issue, with OpenAI’s ChatGPT voice ‘Sky’ drawing criticism for sounding strikingly like hers.

Experts warn that such technology poses risks to reputations and legacies. Dr Jennifer Williams of the University of Southampton highlighted the troubling implications for Attenborough’s legacy and authenticity in the public eye.

Regulations to prevent voice cloning remain absent, raising concerns about its misuse. The Intellectualist has yet to comment on Attenborough’s allegations.