China has approved 487 new AI algorithms for use in deepfake technologies. These include products from major domestic tech companies such as Baidu, Alibaba, and Tencent, as well as foreign firms like Hewlett-Packard. The approval is part of the country’s regulatory efforts under the Cyberspace Administration of China (CAC), which mandates the registration of AI algorithms used in deepfakes. Notable approvals include Baidu’s image generator, Tencent’s search algorithm, and Alibaba’s document creation tool.
This batch of approvals is the second-largest since the regulations took effect in January 2023. The regulations aim to control technologies that create realistic virtual scenes using deep learning and augmented reality. Companies failing to comply face removal from domestic app stores. The CAC has released six allowlists, with the biggest batch of 492 algorithms approved in June.
Cai Peng, a partner at Beijing’s Zhong Lun Law Firm, notes that the increasing size of these lists indicates a more streamlined process between regulators and applicants. The roughly two-month application process involves detailed document submission and revisions as requested by the CAC.
The latest approvals include a healthcare knowledge algorithm for Douyin, a music generator from Microsoft’s Xiaoice, and a character dialogue generator for NetEase. Foreign brands like HP and Yum China also had their algorithms approved. China’s regulatory framework for AI, which includes mandatory registration of generative AI models before public use, reflects the country’s drive to lead in AI regulation.
Grok, an AI chatbot on X (formerly Twitter), has been accused of spreading false information about Vice President Kamala Harris’s eligibility for the 2024 presidential ballot. An open letter from five US secretaries of state, led by Minnesota’s Steve Simon, calls for Elon Musk, CEO of X, to address this issue urgently. The letter claims Grok misled users by suggesting that ballot deadlines had passed in several states when in fact they had not.
The misinformation has raised concerns about the accuracy of information on X. Although Grok includes a disclaimer urging users to verify facts, the incorrect claims circulated widely before being corrected.
The controversy highlights ongoing issues with X’s moderation policies. Under Musk, X has significantly reduced its moderation staff, which has affected its ability to manage misinformation effectively. Additionally, Musk has faced criticism for resharing misleading content and making provocative statements on social media.
The incident underscores X’s challenges in maintaining accurate information and the broader implications for online political discourse.
Tencent has participated in a $300 million financing round for the Chinese AI startup Moonshot, boosting the company’s valuation to $3.3 billion. Gaorong Capital and existing investor Alibaba also took part in the round, in line with both tech giants’ strategy of backing promising AI ventures.
Moonshot, founded in Beijing in March 2023, is one of the ‘Six Little Dragons,’ a group of rapidly growing Chinese AI startups aiming to compete with the likes of OpenAI in the US. Earlier this year, Alibaba led a $1 billion funding round for Moonshot. Both Alibaba and Tencent hold stakes in most of these six companies, which include Baichuan and MiniMax.
Recent months have seen significant capital inflows into Chinese AI firms, with major companies and venture capitalists investing heavily to establish a strong presence in the AI market. Baichuan, another member of the ‘Six Little Dragons,’ recently completed a funding round, securing approximately 5 billion yuan with contributions from Alibaba, Tencent, Xiaomi, and other notable investors.
China has launched a $40 billion state investment fund to boost its semiconductor industry, a move aimed at countering US restrictions on semiconductor exports. The fund is part of a broader effort to enhance domestic AI development and secure a leading position in the global AI market.
London-listed Lloyds Banking Group has appointed Rohit Dhawan, a former executive at Amazon Web Services (AWS), as its first group director of AI and analytics. With a PhD in AI from the University of Sydney, Dhawan previously led data and AI strategy for AWS across the Asia-Pacific region, where he played a key role in implementing AI in customer and operational processes.
Dhawan’s arrival marks a significant step in Lloyds’ ambition to embed AI deeply into its operations. Ranil Boteju, the bank’s chief data and analytics officer, highlighted Dhawan’s extensive experience in delivering technology-driven change at scale and speed. The new director is expected to enhance AI outcomes across various business priorities, ensuring a consistent and strategic integration of AI capabilities.
This appointment is part of Lloyds’ broader push to bolster its technology and data teams. So far in 2024, the bank has hired around 1,500 specialists, bringing the total to over 4,000 new recruits in the last two and a half years. Dhawan expressed his enthusiasm for advancing Lloyds’ ambitious AI strategy, aligning it with the Group’s goal to help Britain prosper.
Despite these strategic advancements, Lloyds has faced financial challenges. The bank reported a 28 per cent drop in net interest income for the first quarter of 2024, primarily due to higher operating costs and peaking interest rates. This resulted in a pre-tax profit of £1.63 billion, consistent with forecasts. Additionally, Lloyds’ shares fell by 2.64 per cent to 53.88p in early afternoon trading on Monday.
Scientists have developed GROVER, an AI model trained to decode human DNA. This innovative tool, created by a team at the Biotechnology Center of Dresden University of Technology, treats DNA as a text, learning its rules and context to draw out functional information from sequences. Published in Nature Machine Intelligence, GROVER has the potential to revolutionise genomics and accelerate personalised medicine.
Understanding DNA’s complex language has been a longstanding challenge. While only 1–2% of the genome consists of genes that code for proteins, the rest contains sequences with multiple functions, many of which remain a mystery. Dr. Anna Poetsch and her team believe AI can help unravel these non-coding regions. GROVER, trained on a reference human genome, has shown the ability to predict DNA sequences and extract contextual information, such as identifying gene promoters and protein binding sites.
GROVER’s development involved creating a DNA dictionary. Using a method inspired by compression algorithms, the team analysed the genome to find common multi-letter combinations, fragmenting the DNA into ‘words’ that improved GROVER’s predictive accuracy. This approach distinguishes GROVER from previous attempts and enhances its ability to decode the genetic language.
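The ‘dictionary’ step described here resembles byte-pair encoding, the tokenisation technique widely used for natural-language models: the most frequent adjacent symbol pairs are repeatedly merged into longer ‘words’. Below is a minimal, hypothetical sketch of that idea applied to a DNA string; the function name, parameters, and example sequence are illustrative assumptions, not GROVER’s actual code.

```python
# Minimal sketch of byte-pair-encoding-style tokenisation of a DNA string.
# Hypothetical example for illustration; not GROVER's training pipeline.
from collections import Counter

def learn_dna_vocab(sequence: str, num_merges: int = 10) -> list[tuple[str, str]]:
    """Greedily merge the most frequent adjacent token pair, BPE-style."""
    tokens = list(sequence)  # start from single nucleotides: A, C, G, T
    merges = []
    for _ in range(num_merges):
        pairs = Counter(zip(tokens, tokens[1:]))
        if not pairs:
            break
        (a, b), _count = pairs.most_common(1)[0]
        merges.append((a, b))
        # Apply the chosen merge across the whole token stream.
        merged, i = [], 0
        while i < len(tokens):
            if i + 1 < len(tokens) and tokens[i] == a and tokens[i + 1] == b:
                merged.append(a + b)
                i += 2
            else:
                merged.append(tokens[i])
                i += 1
        tokens = merged
    return merges

# Frequent combinations such as "TA" or "GC" become single multi-letter 'words'.
print(learn_dna_vocab("TATATAGCGCGCTATA", num_merges=4))
```

On a real genome, the same greedy-merge idea produces a vocabulary of recurring multi-nucleotide fragments, which is what lets a language model work with DNA ‘words’ rather than individual letters.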
Dr. Poetsch and her colleagues are optimistic about GROVER’s impact on genomics. By understanding the rules of DNA through a language model, they hope to uncover deeper biological meanings, advancing both genomics and personalised medicine. GROVER promises to unlock the layers of genetic code, revealing crucial information about human biology, disease predispositions, and treatment responses.
SEMI Europe, a leading semiconductor industry group, urged the EU to minimise restrictions on outbound investments in foreign chip technology. The EU is considering proposals to screen such investments, which could impact European funding in the global semiconductor, AI, and biotechnology sectors. However, no decisions are expected until 2025.
The US has already proposed rules to limit investments in China to protect national security and prevent the transfer of advanced technology. SEMI Europe argues that excessive restrictions could hinder European companies’ ability to invest and innovate, potentially compromising their competitive edge.
The organisation criticised the EU’s potential policies as too broad, suggesting they could force companies to reveal sensitive information and disrupt international research collaborations. SEMI Europe represents over 300 European semiconductor firms and institutions, including major players like ASML, Infineon, and STMicroelectronics.
In addition to outbound investment screening, the EU is advancing legislation to monitor foreign investments in critical European infrastructure and technology to address potential security risks.
Oxford Dynamics, based in Harwell, Oxfordshire, is developing a robot named Strider to operate in hazardous environments, such as chemical, biological, or nuclear incidents. The company has secured a £1m contract with the Ministry of Defence to design and supply this advanced robot by September.
Strider is equipped to handle tasks that are dangerous for humans, like retrieving contaminated objects and performing semi-autonomous activities. The robot is designed to navigate difficult terrains using infra-red, radar, and lidar systems, making it highly versatile in various scenarios, including those similar to the Novichok attack in Salisbury.
Mike Lawton, a director at Oxford Dynamics, envisions building thousands of Strider robots to benefit global safety. He emphasises the importance of deploying machines instead of humans in life-threatening situations. The company also plans to enhance Strider with AVIS AI software, inspired by JARVIS from the Iron Man films, to further improve its capabilities.
Founder Shefali Sharma sees potential for adapting the technology to submarines and fighter jets, aiming to get these innovations into the hands of those who need them most. The initiative has been praised by Defra, highlighting the rapid progress from concept to a highly capable platform.
Alliant Energy has secured several power supply agreements with data centres in Iowa and Wisconsin, as confirmed during a recent post-earnings call. The rise in popularity of AI tools like OpenAI’s ChatGPT has spurred the demand for high-performance data centres, necessitating substantial electricity to process large volumes of data.
The company has been actively working to attract new customers in both states, successfully signing multiple deals with data centres. These agreements highlight Alliant Energy’s strategic efforts to expand its customer base amidst the growing data demands driven by advanced AI technologies.
Despite these new deals, Alliant Energy reported a decline in second-quarter profit, impacted by a settlement agreement related to its Interstate Power and Light unit’s retail electric rate review. This led to a pre-tax non-cash charge of $60 million in the second quarter.
The company’s quarterly adjusted profit for its utilities and corporate services segment fell by 13.8%, equating to 56 cents per share, compared to the previous year. Overall profit for the quarter ended June 30 was $87 million, down from $160 million a year earlier.
According to the New York Times, Meta is negotiating with actors such as Awkwafina and Judi Dench, as well as influencers, to use their voices for its MetaAI digital assistant. The social media giant is also in talks with comedian Keegan-Michael Key and other celebrities, with Hollywood’s top talent agencies involved in the negotiations.
On Wednesday, Meta announced its commitment to significant spending on AI infrastructure. Like many tech companies, Meta has invested billions in its data centres to leverage the generative AI boom.
While it’s unclear which celebrities might finalise deals, reports suggest Meta could pay millions in fees to secure their voices. Meta did not comment on these discussions.
AI-generated music faces strong opposition from musicians and major record labels over concerns about copyright infringement. Grammy-nominated artist Tift Merritt and other prominent musicians have criticised AI music platforms like Udio for producing imitations of their work without permission. Merritt argues that these AI-generated songs are not transformative but amount to theft, harming creativity and human artists.
Major record labels, including Sony, Universal, and Warner Music, have taken legal action against AI companies like Udio and Suno. These lawsuits claim that the companies have used copyrighted recordings to train their systems without proper authorisation, thus creating unfair competition by flooding the market with cheap imitations. The labels argue that such practices drain revenue from real artists and violate copyright laws.
The AI companies defend their technology, asserting that their systems do not infringe on copyrights and that their practices fall under ‘fair use.’ They liken the backlash to past industry fears over new technologies like synthesisers and drum machines. However, the record labels maintain that AI systems misuse copyrighted material to mimic famous artists, including Mariah Carey and Bruce Springsteen, without appropriate licences.
Why does this matter?
These legal battles echo other high-profile copyright cases involving generative AI, such as those against chatbots like OpenAI’s ChatGPT. The outcome of these cases could set significant precedents for using AI in creative industries, with courts needing to address whether AI’s use of copyrighted material constitutes fair use or infringement.