OpenAI, the developer behind ChatGPT, is backing a new California bill, AB 3211, intended to ensure transparency in AI-generated content. The proposed bill would require tech companies to label content created by AI, from innocuous memes to deepfakes that could mislead voters in political campaigns. The legislation has gained attention as concerns grow over the impact of AI-generated material, especially in an election year.
The bill has been somewhat overshadowed by another California AI bill, SB 1047, which mandates safety testing for AI models and has faced resistance from the tech industry, including OpenAI. This resistance highlights the difficulty of regulating AI while balancing innovation and public safety.
California lawmakers have introduced 65 AI-related bills in this legislative session, covering issues from algorithmic fairness to the protection of intellectual property from AI exploitation. However, many of these proposals have yet to advance, leaving AB 3211 as one of the more prominent measures still in play.
OpenAI has stressed the importance of transparency around AI-generated content, especially during elections, advocating measures such as watermarking to help users identify the origins of what they see online. Because AI-generated content is a global issue, there are strong concerns that it could influence upcoming elections in the USA and other countries.
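The bill itself does not prescribe a particular technique, but one widely discussed approach to provenance labelling attaches a signed manifest to generated content so that platforms can later verify a claim of AI origin. The sketch below is a minimal, hypothetical illustration using an HMAC signature; the function names and manifest fields are invented for this example and are not drawn from AB 3211 or from any deployed standard such as C2PA.

```python
import hmac
import hashlib
import json

SECRET_KEY = b"provider-signing-key"  # hypothetical; real systems would use asymmetric keys

def attach_provenance_label(content: bytes, generator: str) -> dict:
    """Bundle content with a signed manifest declaring its AI origin."""
    manifest = {
        "generator": generator,          # e.g. the model that produced the content
        "ai_generated": True,
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_provenance_label(content: bytes, manifest: dict) -> bool:
    """Check that the manifest matches the content and was signed by the provider."""
    claimed_sig = manifest.get("signature", "")
    unsigned = {k: v for k, v in manifest.items() if k != "signature"}
    if unsigned.get("content_sha256") != hashlib.sha256(content).hexdigest():
        return False  # content was altered after labelling
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(claimed_sig, expected)

image_bytes = b"...generated image bytes..."
label = attach_provenance_label(image_bytes, generator="example-image-model")
assert verify_provenance_label(image_bytes, label)
```

A scheme along these lines only proves that a label was issued and the content unmodified since; robust watermarking of the pixels or text itself is a separate, harder problem.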
AB 3211 has already passed the state Assembly with unanimous support and recently cleared the Senate Appropriations Committee. The bill requires a full Senate vote before the legislative session ends on 31 August. If it passes, it will go to Governor Gavin Newsom for approval or veto by 30 September.
Researchers from the Universities of Edinburgh and Dundee are pioneering an AI tool designed to detect early signs of dementia through routine brain scans. Utilising a large dataset of CT and MRI scans from Scottish patients, the team aims to analyse these images with linked health records to identify patterns that may indicate a heightened risk of dementia.
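The article does not detail SCAN-DAN's modelling approach, but the general pattern it describes, pairing imaging-derived features with linked health records to flag elevated risk, can be framed as a supervised learning problem. The sketch below is a hypothetical illustration using scikit-learn on invented, synthetic data; nothing in it reflects the actual SCAN-DAN pipeline.

```python
# Hypothetical sketch: tabular features extracted from routine CT/MRI scans
# are paired with linked health records indicating a later dementia
# diagnosis, and a classifier is trained to estimate risk.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_patients = 1000

# Invented imaging features, e.g. regional atrophy and white-matter scores.
X = rng.normal(size=(n_patients, 4))
# Synthetic labels standing in for linked records: 1 = later dementia diagnosis.
risk = X @ np.array([0.8, -0.5, 0.3, 0.0]) + rng.normal(scale=0.5, size=n_patients)
y = (risk > np.quantile(risk, 0.8)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# Risk scores a radiologist-facing tool might surface for review.
scores = model.predict_proba(X_test)[:, 1]
print(f"held-out AUC: {roc_auc_score(y_test, scores):.2f}")
```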
The ultimate goal is to create a digital healthcare tool for radiologists to assess dementia risk during routine scans. Early identification of high-risk patients could lead to the development of more effective treatments, particularly for Alzheimer’s and vascular dementia. The project, known as SCAN-DAN, is part of a larger global collaboration called NEURii, which focuses on advancing digital health tools.
NEURii brings together international expertise and funding to overcome barriers to commercialisation. By collaborating with partners such as the NHS and the Scottish National Safe Haven, the project ensures that patient data is handled securely, with the aim of integrating AI tools into everyday clinical practice.
Experts believe that early diagnosis is crucial for managing dementia effectively. With costly and limited treatments, projects like SCAN-DAN offer hope for more accessible and reliable solutions. The researchers are confident that this initiative could significantly impact how dementia is diagnosed and treated.
A recent study reveals that nearly half of AI-based medical devices approved by the US Food and Drug Administration (FDA) have not been trained on real patient data. Of 521 devices examined, 43% lacked published clinical validation, raising concerns about their effectiveness in real-world settings.
The study highlights that only 22 of these devices were validated through randomised controlled trials, considered the ‘gold standard’ for clinical testing. Some devices relied on ‘phantom images’ instead of real patient data, while others used retrospective or prospective validation methods. Researchers emphasise the importance of conducting proper clinical validation to ensure these technologies are safe and effective.
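To make the distinction concrete: retrospective validation typically means scoring a frozen model against historical patient records and reporting standard diagnostic metrics such as sensitivity and specificity. The following is a minimal, hypothetical sketch on invented data, not taken from the study.

```python
# Hypothetical retrospective validation: a frozen device's predictions are
# compared against historical ground-truth diagnoses. Data are invented.
from sklearn.metrics import confusion_matrix

y_true = [1, 0, 1, 1, 0, 0, 1, 0, 0, 1]   # historical diagnoses
y_pred = [1, 0, 1, 0, 0, 1, 1, 0, 0, 1]   # device outputs on the same cases

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)   # how many true cases the device catches
specificity = tn / (tn + fp)   # how many healthy cases it correctly clears
print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")
```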
Researchers hope their findings will prompt the FDA and the medical industry to improve the credibility of AI devices by conducting and publishing clinical validation studies. They believe that greater transparency and rigour in these processes will significantly improve patient care.
In Australia, similar regulations exist, with the Therapeutic Goods Administration (TGA) requiring AI-based software to provide information about its training data and suitability for the Australian population. Medical devices must also meet general clinical evidence guidelines to ensure safety and effectiveness.
Google has introduced a new AI-powered chat assistant to help YouTube creators recover hacked accounts. Currently in testing, the tool is accessible to select users and aims to guide them through securing their accounts. The AI assistant will help affected users regain control of their login details and reverse any changes made by hackers. For now, the feature supports only English, but there are plans to expand its availability.
To use the new tool, users must visit the YouTube Help web page and log into their Google Account. They will then find the option to ‘Recover a hacked YouTube channel’ under the Help Centre menu. This option opens a chat window with the AI assistant, which walks them through the steps to secure their account.
Google’s latest innovation reflects its ongoing commitment to enhancing user security. Although the tool is in its early stages, efforts are being made to make it available to all YouTube creators.
As cyber threats evolve, Google’s AI assistant represents an important step forward in providing robust security solutions. The initiative shows the company’s dedication to protecting its users’ online presence.
Islamic State supporters increasingly use AI to bolster their online presence and create more sophisticated propaganda. A recent video praised a deadly attack in Russia, underscoring the evolving methods used by extremists. While AI has been part of the Islamic State’s toolkit for some time, the video’s high production quality marked a new level of sophistication.
Experts have observed a broader trend of extremist groups exploiting generative AI tools to bypass safety controls on social media. These groups use them to generate content that mixes extremist messaging with popular culture, making it easier to reach and radicalise potential recruits. A study by the Combating Terrorism Center revealed that AI could facilitate attack planning and recruitment, with some tools already providing specific and dangerous guidance.
Why does this matter?
The misuse of AI by extremist groups highlights the urgent need for stronger regulation. While some tech companies have developed ethical standards, concerns persist about the effectiveness of current safety measures. The rapid deployment of these technologies without adequate safeguards poses a significant risk as the tools become more accessible to malicious actors.
As the debate over regulation continues, the potential for extremist groups to exploit this technology grows. Experts warn that without more robust oversight, AI could become a powerful tool in the hands of those seeking to spread violence and extremism.
Nvidia is revolutionising its chip design process by leveraging large language models (LLMs) and autonomous AI agents. These innovations are being used to speed up the development of GPUs, CPUs, and networking chips, significantly enhancing design quality and productivity. The models include prediction, optimisation, and automation tools, which help engineers improve designs, generate code, and debug issues more efficiently.
The company has trained an LLM specifically on Verilog, a hardware description language, to accelerate the creation of its systems. This model assists in speeding up the design and verification processes while automating manual tasks, supporting Nvidia’s goal of maintaining a yearly product release cycle. As Nvidia continues to develop increasingly complex architectures, such as the Blackwell architecture, these AI tools are vital in meeting the challenges of next-generation designs.
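Nvidia has not published the interface to its internal Verilog-trained model, but the general pattern of prompting a code LLM for hardware description language output can be sketched with open-source tools. Everything below, including the model checkpoint and the prompt, is an assumption chosen for illustration and is unrelated to Nvidia's system.

```python
# A minimal sketch of prompting a code LLM to complete a Verilog module.
# The model name is a stand-in: any open code-generation checkpoint would do.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "bigcode/starcoder2-3b"  # placeholder open code model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Seed the model with a comment and module header, letting it fill in the body.
prompt = (
    "// Verilog: 8-bit synchronous counter with active-high reset\n"
    "module counter8(\n"
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=120)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

In practice, generated HDL would still pass through the same verification flow as hand-written code, which is why Nvidia pairs generation with debugging and verification aids.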
At the Hot Chips conference in the US, Mark Ren, Nvidia’s director of design automation research, will provide insights into these AI models. He will highlight their applications in chip design, focusing on how agent-based systems powered by LLMs are transforming the field by autonomously completing tasks, interacting with designers, and learning from experience.
The use of AI agents for tasks like timing report analysis and cell cluster optimisation has already gained recognition, with a recent project winning best paper at the IEEE International Workshop on LLM-Aided Design. Nvidia’s advancements demonstrate the critical role of AI in pushing the boundaries of chip design.
Despite declining quarterly revenue, Baidu said in a statement that its leading position in AI in China will help it navigate the increasingly competitive market. The comment comes amid an AI price war in China, where companies are steadily lowering the prices of the large language models powering generative AI technologies.
Ernie, Baidu’s large language model, has been integrated into various applications to enhance user experience and is touted as a competitor to OpenAI’s GPT. According to Baidu CEO Robin Li, the company’s Ernie platform processes over 600 million AI requests daily, the highest volume among Chinese firms. Li added, ‘Competition will be fierce over the next 2 to 3 years.’
As China’s dominant search engine, Baidu derives most of its revenue from ads. However, the company has strategically pivoted to AI, investing significantly in the sector to position itself as an ‘AI company’. It has expanded its AI offerings by introducing a paid version of its Ernie-powered chatbot for public use and offering API services to developers via cloud computing. ‘Our advertising business is currently facing pressure caused by a combination of external factors and our proactive efforts to accelerate the AI-driven renovation of search,’ Li said during a conference call with analysts.
Why does this matter?
The dip in revenue points to Baidu’s difficulty in transitioning from search ads to AI as China faces an economic slump. Its decision to prioritise AI while search revenue stalls is part of a broader tech trend: amid the AI gold rush, companies are expanding their AI portfolios to stay competitive in a market expected to generate massive business value.
Perplexity AI, backed by Jeff Bezos and Nvidia, has announced plans to start running advertisements on its AI-based search engine platform by the fourth quarter of the year. Last month, the company rolled out a publishers’ program with partners including TIME, Der Spiegel, Fortune, Entrepreneur, The Texas Tribune, and WordPress.
The AI-powered search engine space is still in its infancy, opening a massive market for new players. Among the big tech giants, Google has integrated AI into its search by providing AI-generated summaries, or overviews, for each search request. Meanwhile, its rival Microsoft has integrated OpenAI technology and launched the AI-powered Bing.
Why does this matter?
This move could threaten Google’s dominant position in the industry. Two decades of search engine supremacy and an ad-based revenue model made Google one of the world’s most valuable companies. Since ChatGPT launched, existing and upcoming search engines have attempted to integrate AI into web search and bring a new business model to the search engine space.
Chinese entities linked to the state are turning to cloud services from Amazon and its rivals to access advanced US chips and AI capabilities that are otherwise restricted. Over the past year, at least 11 Chinese organisations have sought cloud services to bypass US export restrictions on high-end AI chips, according to tender documents.
Amazon Web Services (AWS) was specifically mentioned as a provider in several cases, though Chinese intermediaries were used to access the services. US regulations focus on the export or transfer of physical technology, leaving a loophole for cloud-based access. This has allowed US companies to profit from China’s growing demand for computing power.
Efforts to close this loophole are ongoing. US legislators have expressed concerns, and the Commerce Department is considering new rules to tighten control over remote access to advanced technology. AWS has stated that it complies with all applicable laws, including trade regulations in the countries where it operates.
Microsoft’s cloud services have also been sought by Chinese universities for AI projects. These activities highlight the increasing demand for US technology in China and the challenges in enforcing export controls. Both Amazon and Microsoft declined to comment on specific deals, but the implications for US-China tech relations are significant.
Recent public tender documents name several of the Chinese state-linked entities turning to cloud services for restricted US technology. By using cloud platforms like Amazon Web Services (AWS), these entities gain access to advanced chips and AI capabilities that would otherwise be unavailable under US trade restrictions.
Entities like Zhejiang Lab and the National Center of Technology Innovation for EDA have expressed interest in using AWS for AI development. Others, such as Shenzhen University and Fujian Chuanzheng Communications College, have reportedly utilised Nvidia chips through cloud services, circumventing US export bans.
Microsoft’s Azure platform has also attracted attention from Chinese institutions like Chongqing Changan Automobile Co and Sichuan University, which are exploring generative AI technology. The ability to integrate these advanced tools into their systems is seen as critical for maintaining competitiveness.
Concerns remain over the use of US technology by Chinese organisations, especially those with potential military applications. Universities such as Southern University of Science and Technology and Tsinghua University have pursued cloud access to Nvidia chips, despite US efforts to restrict such technology transfers.