Controversial California AI bill aims to prevent major disasters

California is set to vote on SB 1047, a bill designed to prevent catastrophic harm from AI systems. The bill targets large AI models, defined as those costing over $100 million to train and using immense computing power, and requires their developers to implement strict safety protocols, including emergency shut-off mechanisms and third-party audits. A new Frontier Model Division (FMD) would oversee compliance and enforce penalties for violations.

While the bill aims to mitigate risks such as AI-driven cyberattacks or weapon creation, it has sparked significant controversy. Silicon Valley leaders, including tech giants and venture capitalists, argue that SB 1047 could stifle innovation and impose undue burdens on startups. Critics claim it may hinder the development of new AI technologies and drive innovation away from California.

Supporters of the bill, including State Senator Scott Wiener and prominent AI researchers, contend that preemptive regulation is essential: it is better to establish safeguards before a serious incident occurs than after. The bill is expected to pass the Senate, after which it would go to Governor Gavin Newsom for a final decision.

If passed, SB 1047 would not take effect immediately, with the FMD scheduled to be established by 2026. The bill is anticipated to face legal challenges from various stakeholders who are concerned about its implications for the tech industry.

Top investor urges boards to strengthen AI competency

Norway’s $1.7 trillion sovereign wealth fund, one of the world’s largest investors, is calling for improved AI governance at the board level across its portfolio companies. Carine Smith Ihenacho, the fund’s Chief Governance and Compliance Officer, highlighted the need for boards not only to understand how AI is being used but also to establish robust policies to ensure its responsible application. The fund, which holds stakes in nearly 9,000 companies, has already shared its views on AI with the boards of 60 major firms.

The fund’s call for enhanced AI competency comes as it increases its focus on the technology sector, where it holds significant stakes in major companies such as Microsoft and Apple. Its emphasis is on ensuring that AI is used responsibly, particularly in high-impact sectors such as healthcare. Smith Ihenacho stressed that boards must be able to address key questions about their AI policies and risks, even if they don’t have a dedicated AI expert.

Despite its concerns, the fund supports the responsible use of AI, recognising its potential to drive innovation and productivity. The push for better AI governance is part of the fund’s broader strategy to maintain high standards in environmental, social, and corporate governance (ESG) across its investments.

As the AI sector continues to grow, the fund’s recommendations reflect a broader trend towards increasing accountability and transparency in the use of emerging technologies.

AI search summaries debut in new countries as Google updates feature

Google is expanding its AI-generated search summaries, known as AI Overviews, to six new countries: Brazil, India, Indonesia, Japan, Mexico, and Britain. This follows an earlier rollout in the US, which drew criticism for inaccurate and misleading answers. The company has since refined the feature, adding restrictions to improve accuracy and reducing its reliance on user-generated content from sites like Reddit.

The updated AI Overviews now include more hyperlinks to relevant websites, displayed alongside the AI-generated answers, with plans to integrate clickable links directly within the text. Google aims to balance user experience with publisher traffic, responding to concerns from the media industry about potential impacts on referral traffic.

Hema Budaraju, a senior director at Google, reported improved user satisfaction based on internal data, noting that users of the feature tend to engage more deeply with search queries. These updates come at a time when Google faces legal challenges and competition from AI advancements by rivals like Microsoft-backed OpenAI.

Ridley Scott embraces AI to revolutionise action in ‘Gladiator II’

Ridley Scott, the acclaimed director behind the original Gladiator, is raising the stakes with Gladiator II, promising some of the biggest action sequences of his career. In a recent interview with Empire Magazine, Scott revealed that the film begins with an enormous action scene, surpassing even his work on Napoleon. Paul Mescal stars in the sequel, alongside Pedro Pascal and Denzel Washington, taking audiences on a thrilling new adventure two decades after the Oscar-winning original.

Scott has embraced advanced technology, including AI, to bring his vision to life. One of the standout sequences features Paul Mescal’s character, Lucius, facing off against a massive rhino. Scott shared that he used a combination of computerisation and AI to create a lifelike model of the rhino, which was mounted on a robotic platform capable of impressive movements, adding a new layer of realism to the film’s action.

The director’s shift in attitude towards AI is notable, given his earlier concerns about the technology. Last year, Scott expressed fears about AI’s potential to disrupt society, but now he acknowledges its role in filmmaking. Despite his previous reservations, Scott seems to have found a balance between caution and innovation, using AI to push the boundaries of what’s possible on screen.

Sahara AI secures fresh $43 million funding

Sahara AI, a decentralised blockchain and AI startup, has raised $43 million in a Series A funding round. The round drew backing from prominent investors including Pantera Capital, Binance Labs, and Polychain Capital, with Samsung NEXT, Matrix Partners, dao5, and Geekcartel also participating.

The funds will be utilised to expand Sahara AI’s global team, improve the platform’s performance, and grow its developer ecosystem. By leveraging its decentralised platform, Sahara AI aims to reward users, data sources, and AI trainers, rather than just the companies that create AI models. The company’s approach is seen as a shift from the traditional model, promoting transparency and fair compensation.

Founded in April 2023, Sahara AI has already partnered with leading tech firms such as Microsoft, Amazon, and Snap. These collaborations highlight the startup’s rapid growth and the increasing interest in its unique decentralised approach to AI.

As the use of AI continues to rise, concerns around data privacy, copyright, and ethical issues have become more pronounced. Sahara AI’s approach seeks to address these challenges by ensuring transparency and fairness in how AI models are developed and utilised.

AI innovation at Singapore’s NUHS reduces workload

Singapore’s National University Health System (NUHS) is leveraging advanced AI technologies to enhance efficiency and reduce administrative workloads in healthcare. Through the RUSSELL-GPT platform, which integrates large language models (LLMs) via Amazon Bedrock on AWS, over a thousand clinicians now benefit from automated tasks such as drafting referrals and summarising patient data, reducing administrative time by 40%.
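The article doesn’t detail RUSSELL-GPT’s internals, but for readers curious what an LLM integration via Amazon Bedrock can look like, below is a minimal Python sketch of a summarisation call using Bedrock’s Converse API. It is illustrative only, not NUHS’s implementation; the region, model ID, prompt, and inference settings are assumptions.

```python
# Minimal sketch: calling Amazon Bedrock's Converse API to draft a summary
# from clinical notes. Illustrative only; not NUHS's RUSSELL-GPT code.
# The region, model ID, prompt, and inference settings are assumptions.
import boto3

# Bedrock runtime client (ap-southeast-1 is the Singapore region).
client = boto3.client("bedrock-runtime", region_name="ap-southeast-1")

notes = (
    "Patient admitted with chest pain. ECG and troponin unremarkable. "
    "Discharged with outpatient cardiology follow-up."
)

response = client.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # assumed model choice
    messages=[{
        "role": "user",
        "content": [{"text": f"Draft a short referral summary from these notes:\n{notes}"}],
    }],
    inferenceConfig={"maxTokens": 300, "temperature": 0.2},
)

# The Converse API returns the assistant message under output -> message -> content.
print(response["output"]["message"]["content"][0]["text"])
```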

The NUHS team is working on event-driven Generative AI models that can perform tasks automatically when triggered by specific events, such as drafting discharge letters without needing any prompts. This approach aims to streamline processes further and reduce the administrative burden on healthcare staff.

Ensuring patient data security is a top priority for NUHS, with robust measures in place to keep data within Singapore and comply with local privacy laws. RUSSELL-GPT also includes features to mitigate the risks of AI hallucinations, with mandatory training for users on recognising and managing such occurrences.

Despite the promise of LLMs, NUHS acknowledges that these models are not a cure-all. Classical AI still plays a critical role in tasks like clustering information and providing predictive insights, underlining the need for a balanced mix of both approaches in healthcare.

SoftBank abandons AI chip partnership with Intel and shifts focus to TSMC

SoftBank has abandoned its plan to develop an AI chip in partnership with Intel, according to a report by the Financial Times. The Japanese tech investor had intended to collaborate with Intel to challenge Nvidia, but the deal fell through after Intel failed to meet SoftBank’s requirements, as reported by sources familiar with the situation. SoftBank attributed the breakdown of negotiations to Intel’s inability to deliver on speed and production volume demands.

As a result, SoftBank has shifted its focus to discussions with Taiwan Semiconductor Manufacturing Co (TSMC), the world’s largest contract chipmaker. The report noted that the collapse of talks occurred before Intel’s significant cost-cutting measures, including massive layoffs in early August.

Why does this matter?

These events highlight the intensifying competition in the AI chip market, where companies like Nvidia currently dominate. SoftBank’s decision to abandon its partnership with Intel and shift focus to TSMC underscores the challenges Intel faces in keeping pace with AI-driven innovations. The move also signals potential shifts in global chip production dynamics, with TSMC further solidifying its role as a key player. Additionally, it reflects the broader implications of Intel’s internal struggles, such as meeting demand and cost-cutting, on its competitiveness in critical emerging technologies like AI.

English Premier League to upgrade offside calls with new technology

The English Premier League is set to enhance offside decision-making with new technology from Genius Sports. Multiple iPhones, paired with advanced machine-learning models, will assist referees in making more accurate offside calls. Traditional Video Assistant Referee (VAR) systems have faced criticism for slow reviews and inconsistent decisions, leading to this shift.

Genius Sports developed ‘Semi-Automated Offside Technology’ (SAOT) as part of its GeniusIQ system. Up to 28 iPhones will be placed around the pitch to generate 3D models of players, offering precise offside line determinations. The iPhones, which capture between 7,000 and 10,000 data points per player, replace expensive 4K cameras.

Strategically positioned on custom rigs, iPhones will cover optimal areas of the pitch. Data collected will be processed by the GeniusIQ system, using predictive algorithms to assess player positions even when obscured. High framerate recording and local processing capabilities further enhance the system’s accuracy.

Genius Sports plans to fully implement the system in the Premier League by the end of the year. While the exact date remains unconfirmed, this marks a significant advancement in football technology, promising a more precise and consistent approach to offside rulings.

AI push in India: Google tackles language and farming challenges

Google is intensifying its AI initiatives in India, with a focus on addressing language barriers and improving agricultural efficiency. Abhishek Bapna, Director of Product Management at Google DeepMind, emphasised the economic importance of breaking language barriers, particularly in areas like healthcare and banking. Google’s AI chatbot, Gemini, supports over 40 languages globally, including nine Indian languages, and aims to enhance language quality further.

In collaboration with the Indian Institute of Science, Google’s Project Vaani provides over 14,000 hours of speech data from 80 districts, empowering developers to create more efficient AI models for India’s multilingual environment. Additionally, the IndicGenBench benchmark helps fine-tune language models for Indian languages. These efforts are crucial to improving the accuracy and reach of AI in the country.

Google is also piloting its Agricultural Landscape Understanding (ALU) Research API in Telangana, designed to boost farm yields and enhance market access. The initiative aligns with Google’s broader goals of improving livelihoods and addressing climate change, offering granular data-driven insights at the farm field level.

These initiatives are expected not only to assist farmers but also to attract users such as banks and insurance companies. Once the pilot programme is completed, Google plans to scale the project to work with state governments across India.

Australia sets six-month deadline for AI use disclosure

Government agencies in Australia must disclose their use of AI within six months under a new policy effective from 1st September. The policy mandates that agencies prepare a transparency statement detailing their AI adoption and usage, which must be publicly accessible. Agencies must also designate a technology executive responsible for ensuring the policy’s implementation.

The transparency statements, updated annually or after significant changes, will include information on compliance, monitoring effectiveness, and measures to protect the public from potential AI-related harm. Although staff training on AI is strongly encouraged, it is not a mandatory requirement under the new policy.

The policy was developed in response to concerns about public trust, recognising that a lack of transparency and accountability in AI use could hinder its adoption. The Australian government aims to position itself as a model of safe and responsible AI usage by integrating the new policy with existing frameworks and legislation.

Minister for Finance and the Public Service, Katy Gallagher, emphasised the importance of the policy in guiding agencies to use AI responsibly, ensuring Australians’ confidence in the government’s application of these technologies.