The Defense Advanced Research Projects Agency (DARPA) announced the finalists of its AI Cyber Challenge (AIxCC) at DEF CON. The competition rewards teams for building systems that use large language models (LLMs) to identify and fix vulnerabilities in open-source code. Big Tech companies such as Google, Microsoft, Anthropic, and OpenAI supported participants with AI model credits. About 40 teams submitted projects, which were tested on their ability to detect and remediate vulnerabilities injected into open-source software projects.
Experts say that generative AI can help automate the detection and patching of security flaws in code, a capability that could prove critical as unsophisticated yet harmful cyberattacks increasingly target critical facilities such as hospitals and water systems. Automating basic cybersecurity practices, such as scanning code and fixing bugs, could significantly reduce these incidents.
Running in a controlled, sandboxed environment, the semifinalists' systems discovered 22 unique vulnerabilities and automatically patched 15 of them. DARPA, which has invested over $2 billion in AI research since 2018, plays a distinctive role in cybersecurity innovation: at DEF CON it staged a mock city under cyberattack that attracted over 12,500 visitors. The seven finalist teams will compete in the challenge's final round at next year's DEF CON, with government officials hoping these AI tools will soon be applied to protect real-world critical infrastructure.
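To make the idea concrete, the snippet below is a minimal sketch of how an LLM can be asked to flag and patch a flaw in a piece of code. It is not any competing team's system: real AIxCC entries combine fuzzing, static analysis, and sandboxed validation, and the client library and model name used here (the OpenAI Python SDK and a placeholder model) are illustrative assumptions only.

```python
# Minimal sketch of an LLM-assisted "find and fix" pass over a code snippet.
# Illustrative only: the model name is a placeholder, and real AIxCC systems
# validate candidate patches by rebuilding and re-testing the target project.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SUSPECT_CODE = """
char buf[16];
strcpy(buf, user_input);   /* classic unchecked copy into a fixed buffer */
"""

def triage_and_patch(snippet: str) -> str:
    """Ask the model to identify a likely vulnerability and propose a patch."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; any capable code model could be used
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a security reviewer. Identify any vulnerability "
                    "in the code and return a patched version with a one-line "
                    "explanation."
                ),
            },
            {"role": "user", "content": snippet},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(triage_and_patch(SUSPECT_CODE))
```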
Anne Neuberger, the Biden administration’s deputy national security advisor for cyber and emerging technology, emphasised the goal of using AI for defense as swiftly as adversaries use it for offense. The White House is already collaborating with the Department of Energy to explore deploying these AI tools within the energy sector and hopes to eventually apply them to proprietary company code.
Dutch copyright enforcement group BREIN has taken down a large language dataset that was being offered for training AI models without rights holders' permission. The dataset contained information gathered from tens of thousands of books, news sites, and Dutch-language subtitles from numerous films and TV series. BREIN's director, Bastiaan van Ramshorst, noted the difficulty of determining whether, and how extensively, AI companies had already used the dataset.
The removal comes as the EU prepares to enforce its AI Act, requiring companies to disclose the datasets used in training AI models. The person responsible for offering the Dutch dataset complied with a cease and desist order and removed it from the website where it was available.
Why does this matter?
The action follows similar moves in other countries, such as Denmark, where a copyright protection group took down the large 'Books3' dataset last year. BREIN did not disclose the identity of the individual behind the dataset, citing Dutch privacy regulations.
BMW is exploring the potential of AI-powered humanoid robots to assist in car manufacturing. The German automaker has partnered with California-based Figure to trial its advanced humanoid robot, Figure 02, at BMW Group Plant Spartanburg in South Carolina. The trial tested the robot's ability to handle complex tasks with human-like dexterity, focusing on whether such machines can safely integrate into production lines.
Footage from the trial showcases the robot’s capabilities, including its ability to walk, grasp, and coordinate two-handed tasks. The Figure 02 is equipped with a powerful processor, advanced sensors, and human-scale hands, making it suitable for physically demanding and repetitive tasks in the factory. The combination of mobility and dexterity positions the robot as a potential asset in challenging work environments.
Why does this matter?
BMW highlights the significance of these developments in robotics, noting the promise they hold for the future of production. While the company has not yet committed to incorporating AI robots into its workforce, the rapid advancement of AI suggests that their use in manufacturing may soon become a reality.
The trial serves as an early step in assessing the feasibility of humanoid robots in production settings, with BMW keen to stay at the forefront of this technological evolution. The company is carefully evaluating the results to determine the best possible applications for these robots in the automotive industry.
Japanese startup Sakana AI has unveiled The AI Scientist, an advanced system capable of fully autonomous scientific research. Collaborating with the University of Oxford’s Foerster Lab and experts from the University of British Columbia, Sakana AI has developed a groundbreaking tool that enables large language models (LLMs) to generate research ideas, execute experiments, and draft scientific papers independently.
The AI Scientist represents a significant leap forward in automated scientific discovery. It utilises frontier LLMs not only to write code and visualise results but also to ensure the quality of its output through a simulated peer-review process. This innovation marks a new era in how scientific research could be conducted.
Each research paper generated by the AI Scientist costs less than £12, making it an affordable option for researchers. An automated reviewer has been designed to evaluate the generated papers, further streamlining the research process.
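As a rough illustration of the kind of loop such a system runs, here is a schematic sketch in Python. It is not Sakana AI's released code: the `ask_llm` helper is a hypothetical stand-in for a call to a frontier model, and a real system would execute the generated experiment code in a sandbox rather than merely asking the model to describe the outcome.

```python
# Schematic "idea -> experiment -> paper -> review" loop in the spirit of
# The AI Scientist. Illustrative sketch only; ask_llm() is a stand-in for a
# real chat-model call, and no generated code is actually executed here.
from dataclasses import dataclass


def ask_llm(prompt: str) -> str:
    """Stand-in for a frontier-LLM call; swap in a real API client here."""
    return f"[model output for: {prompt[:48]}...]"


@dataclass
class Paper:
    idea: str
    results: str
    draft: str
    review: str


def run_pipeline(topic: str) -> Paper:
    idea = ask_llm(f"Propose a novel, testable research idea about {topic}.")
    code = ask_llm(f"Write an experiment script to test this idea:\n{idea}")
    # A real system would run the generated script in a sandbox and collect
    # its logs and plots; here we only ask the model to describe the outcome.
    results = ask_llm(f"Summarise the results of running:\n{code}")
    draft = ask_llm(f"Write a short paper.\nIdea:\n{idea}\nResults:\n{results}")
    review = ask_llm(f"Act as a peer reviewer. Score and critique:\n{draft}")
    return Paper(idea, results, draft, review)


if __name__ == "__main__":
    print(run_pipeline("diffusion-model sampling").review)
```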
In addition to The AI Scientist, Sakana AI has introduced EvoSDXL-JP, a model capable of generating Japanese-style images ten times faster. Available on Hugging Face, it serves as a tool for research and educational purposes.
Why does this matter?
If AI can draft scientific papers, as Sakana AI has shown, numerous questions open up: What is the future of scientific publishing? What is the future of science itself? How can humans compete with machine intelligence? These questions are not merely conceptual or philosophical.
They impact the core of the scientific world. At Diplo, we have been developing the KaiZen publishing approach, which combines just-in-time AI writing with more reflective human inputs.
Companies from the US and China are leading the race in AI research, with Alphabet, the parent company of Google, at the forefront. A recent study from Georgetown University revealed that Alphabet has published the most frequently cited AI academic papers over the past decade. Seven of the top ten positions are held by US companies, including Microsoft and Meta, reflecting their dominance in the field.
Chinese firms are not far behind, with Tencent, Alibaba, and Huawei securing spots within the top ten. These companies have shown remarkable growth, particularly in the number of papers accepted at major conferences. Huawei has outpaced its competitors with a 98.2% annual growth rate in this area, followed by Alibaba at 53.5%.
The competition extends beyond academic publications to patents. Baidu, a leading Chinese tech firm, topped the list of patent applications with over 10,000 submissions from 2013 to 2023. Baidu’s growth has been particularly striking, with a 228% increase in patent applications year-on-year in 2020. US companies hold three spots in the top ten for patents, with IBM making the list.
Samsung Electronics is the only Korean company to make the top 100, ranking No. 14 for highly cited AI articles and No. 4 for patents. However, Samsung’s growth in these areas has been slower compared to other global leaders, with modest increases in conference paper acceptances in recent years.
Tokyo has introduced a new AI-driven system aimed at improving the speed and efficiency of disaster response efforts. The technology, developed by Hitachi Ltd., leverages high-altitude cameras to detect fires and building collapses in real time, ensuring that emergency services receive critical information without delay.
The system is designed to automatically identify signs of disasters, such as structural collapses and fires, and immediately notify the police, fire department, and Japan’s self-defense forces. This rapid communication is intended to streamline response efforts, potentially saving lives during emergencies.
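For a rough sense of how such a detect-and-notify pipeline fits together, the sketch below shows one way it could be structured. It is an illustrative assumption rather than Hitachi's implementation: `detect_event` stands in for an image-recognition model and `notify` for whatever alerting integration the agencies actually use.

```python
# Illustrative camera-feed "detect and notify" loop; not Hitachi's system.
# detect_event() and notify() are hypothetical stand-ins for a vision model
# and an alerting integration.
import time
from typing import Callable, Optional

AGENCIES = ["police", "fire department", "Self-Defense Forces"]


def detect_event(frame: bytes) -> Optional[str]:
    """Classify a camera frame as 'fire', 'collapse', or None (stub)."""
    return None  # replace with an image-recognition model


def notify(event: str, camera_id: str) -> None:
    """Fan an alert out to the relevant agencies (here, just printed)."""
    for agency in AGENCIES:
        print(f"ALERT -> {agency}: {event} detected by camera {camera_id}")


def monitor(camera_id: str, get_frame: Callable[[], bytes]) -> None:
    """Poll a camera roughly once per second and escalate any detection."""
    while True:
        event = detect_event(get_frame())
        if event is not None:
            notify(event, camera_id)
        time.sleep(1)
```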
High-resolution cameras have been strategically installed at various locations across Tokyo, including the metropolitan government buildings, to support the system’s operations. The initiative is particularly significant given Japan’s ongoing concerns about the threat of a ‘mega’ earthquake.
Tokyo’s investment in AI technology reflects the city’s commitment to bolstering its disaster preparedness, aiming to safeguard its residents by ensuring quicker and more effective emergency responses.
Pakistan's Prime Minister, Shehbaz Sharif, has highlighted the significant role of the provinces in supporting students through the National Finance Commission (NFC), which allocates 60 percent of its shares to them. Speaking at the International Youth Day event, he pointed out that the provinces now have ample resources to invest in educational initiatives, including the distribution of laptops, a practice he championed as Chief Minister of Punjab.
The Prime Minister announced that the federal government would distribute smartphones to one million high-achieving students, while provinces will continue to provide laptops. Emphasising the importance of technology in education, he underlined that equipping the new generation with modern tools is essential for the country’s future. AI, he noted, is a key area for growth.
Citing China’s success through technological advancements, the Prime Minister of Pakistan vowed to make all necessary resources available to students. He also reflected on accumulated debt over the past 70 years, contrasting it with the vision of the country’s founders. The speech included a call to action for a long-term educational programme to be launched after 14 August.
Sharif’s remarks stressed the need to bridge the gap between different social classes in Pakistan, with a focus on ensuring that every child, regardless of background, has access to the latest technology. He expressed hope that these initiatives would lead to a brighter future for the nation.
Huawei Technologies is on the brink of releasing a new AI chip, Ascend 910C, to challenge Nvidia’s dominance in the Chinese market. The company has made significant strides despite US sanctions, with Chinese internet firms and telecom operators recently testing the processor.
Huawei claims that the Ascend 910C rivals Nvidia's H100, a powerful AI chip that US export restrictions have kept out of the Chinese market.
Why does this matter?
The development signals Huawei’s ongoing efforts to circumvent restrictions and bolster its position in the AI sector.
The potential impact of OpenAI's realistic voice feature on human interactions has raised concerns, with the company warning that people might form emotional bonds with AI at the expense of real-life relationships. The company noted that users of its GPT-4o model have shown signs of anthropomorphising the AI, attributing human-like qualities to it, which could lead to misplaced trust and dependency. OpenAI's report highlighted that high-quality voice interaction might exacerbate these issues, raising questions about the long-term effects on social norms.
The company observed that some testers of the AI voice feature interacted with it in ways that suggested an emotional connection, such as expressing sadness over the end of their session. While these behaviours might seem harmless, OpenAI emphasised the need to study their potential evolution over time. The report also suggested that reliance on AI for social interaction could diminish users’ abilities or willingness to engage in human relationships, altering how people interact with one another.
Concerns were also raised about the AI’s ability to recall details and handle tasks, which might lead to over-reliance on the technology. OpenAI further noted that its AI models, designed to be deferential in conversations, might inadvertently promote anti-social norms when users become accustomed to behaviours, such as interrupting, that are inappropriate in human interactions. The company pledged to continue testing how these voice capabilities could affect emotional attachment and social behaviour.
The issue gained attention following a controversy in June when OpenAI was criticized for allegedly using a voice similar to actress Scarlett Johansson's in its chatbot. Although the company denied the voice belonged to Johansson, the incident underscored the risks associated with voice-cloning technology. As AI models continue to advance toward human-like reasoning, experts are increasingly urging a pause to consider the broader implications for human relationships and societal norms.
US Energy Secretary Jennifer Granholm has assured that the country will be able to meet the growing electricity demands driven by the rapid expansion of data centres powering AI. The Department of Energy anticipates that electricity demand will double by midcentury due to factors such as manufacturing growth, electrification of vehicles, and AI development. Despite concerns from local communities about the strain on resources, Granholm remains confident that clean energy sources will be sufficient to handle this increased demand, bolstered by significant investments under recent legislation.
Granholm highlighted the strong growth in renewable energy investments, predicting the deployment of over 60 gigawatts of clean energy and storage capacity this year alone. However, she acknowledged the immediate challenge of building transmission lines to connect data centers to these clean power sources. The Department of Energy is working to expedite the permitting process for such projects, with public and private investments playing a key role in expanding infrastructure.
The growth of AI has put many renewable energy goals to the test. Collaborations between tech giants such as Google and energy departments are emerging as a solution to meet the surging demand. For example, a recent partnership in Virginia between Iron Mountain and the state's energy department will introduce large-scale batteries to store renewable energy for data centers. Granholm suggested that such initiatives could turn the demand from data centers into a catalyst for further investment in renewable energy.
The US Department of Energy is also researching ways to improve efficiency in data centers, aiming to help tech companies increase computing power while managing energy consumption. Granholm, after recent meetings with tech and utility leaders, hinted at upcoming major announcements that would reinforce America's leadership in technology and innovation.