Nvidia accelerates chip design with AI agents
AI models and LLMs are transforming how Nvidia designs GPUs and CPUs.

Nvidia is revolutionising its chip design process by leveraging large language models (LLMs) and autonomous AI agents. The company is using these tools to speed up the development of GPUs, CPUs, and networking chips and to improve design quality and engineering productivity. The models span prediction, optimisation, and automation, helping engineers refine designs, generate code, and debug issues more efficiently.
The company has trained an LLM specifically on Verilog, a hardware description language, to accelerate the design of its chips. The model speeds up design and verification and automates manual tasks, supporting Nvidia’s goal of maintaining a yearly product release cycle. As Nvidia develops increasingly complex architectures, such as Blackwell, these AI tools are vital to meeting the challenges of next-generation designs.
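The article does not detail how Nvidia’s Verilog model is wired into its flow, but a minimal sketch gives a sense of what an LLM-assisted RTL workflow can look like: a natural-language prompt produces Verilog, which is then compiled and simulated before anyone trusts it. The generate_verilog function below is a hypothetical stand-in for any LLM API, and Icarus Verilog (iverilog) is used only as an example open-source simulator.

```python
# Illustrative sketch only -- not Nvidia's internal tooling.
# generate_verilog() is a hypothetical stand-in for an LLM call;
# iverilog is an example open-source Verilog compiler/simulator.
import subprocess
from pathlib import Path


def generate_verilog(prompt: str) -> str:
    """Hypothetical LLM call; returns hand-written RTL here as a stand-in."""
    return """
module counter #(parameter WIDTH = 8) (
    input  wire             clk,
    input  wire             rst_n,
    output reg [WIDTH-1:0]  count
);
    always @(posedge clk or negedge rst_n) begin
        if (!rst_n) count <= {WIDTH{1'b0}};
        else        count <= count + 1'b1;
    end
endmodule
"""


def main() -> None:
    rtl = generate_verilog("Write a parameterised counter with async reset.")
    Path("counter.v").write_text(rtl)

    # Verification step: compile the generated RTL. A real flow would also
    # run testbenches and formal checks before accepting the module.
    result = subprocess.run(
        ["iverilog", "-o", "counter.out", "counter.v"],
        capture_output=True, text=True,
    )
    print("compile ok" if result.returncode == 0 else result.stderr)


if __name__ == "__main__":
    main()
```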
At the Hot Chips conference in the US, Mark Ren, Nvidia’s director of design automation research, will present these AI models. He will highlight their applications in chip design, focusing on how agent-based systems powered by LLMs are transforming the field by autonomously completing tasks, interacting with designers, and learning from experience.
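The talk’s framing of agents that complete tasks, call tools, and learn from experience maps onto a familiar loop. The sketch below is a rough assumption about how such an agent could be structured, not a description of Nvidia’s system; every class and tool name in it is a hypothetical placeholder.

```python
# Rough, assumed sketch of an LLM-style agent loop -- not Nvidia's implementation.
# All class and tool names are hypothetical placeholders.
from dataclasses import dataclass, field
from typing import Callable


@dataclass
class DesignAgent:
    tools: dict[str, Callable[[str], str]]       # tool name -> callable
    memory: list[tuple[str, str]] = field(default_factory=list)

    def decide(self, task: str) -> str:
        """Stand-in for an LLM choosing a tool; here, a naive keyword match."""
        for name in self.tools:
            if name in task:
                return name
        return next(iter(self.tools))

    def run(self, task: str) -> str:
        tool = self.decide(task)
        observation = self.tools[tool](task)
        self.memory.append((task, observation))  # 'learning from experience'
        return observation


# Hypothetical tools a chip-design agent might call.
def run_lint(task: str) -> str:
    return "lint: 0 errors, 2 warnings"


def analyse_timing(task: str) -> str:
    return "timing: worst negative slack -0.12 ns on path clk->q"


agent = DesignAgent(tools={"lint": run_lint, "timing": analyse_timing})
print(agent.run("check timing on the new counter block"))
```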
The use of AI agents for tasks like timing report analysis and cell cluster optimisation has already gained recognition, with a recent project winning best paper at the IEEE International Workshop on LLM-Aided Design. Nvidia’s advancements demonstrate the critical role of AI in pushing the boundaries of chip design.
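Timing report analysis is a natural fit for automation because static timing reports are long and repetitive, and the first step is usually extracting the failing paths. A minimal sketch of that triage step, assuming a generic report format (the field names and layout below are illustrative, not a specific tool’s output):

```python
# Minimal sketch of timing-report triage, assuming a generic report format;
# real STA tools (and the agents described above) handle far richer data.
import re

REPORT = """\
Path 1: startpoint u_core/reg_a  endpoint u_core/reg_b  slack (VIOLATED) -0.124
Path 2: startpoint u_io/reg_c    endpoint u_core/reg_d  slack (MET)       0.310
Path 3: startpoint u_core/reg_e  endpoint u_mem/reg_f   slack (VIOLATED) -0.051
"""

# Capture endpoint name, status, and slack value for each reported path.
PATTERN = re.compile(r"endpoint\s+(\S+)\s+slack\s+\((\w+)\)\s+(-?\d+\.\d+)")

violations = [
    (endpoint, float(slack))
    for endpoint, status, slack in PATTERN.findall(REPORT)
    if status == "VIOLATED"
]

# Worst path first, so an engineer (or an agent) knows where to look.
for endpoint, slack in sorted(violations, key=lambda v: v[1]):
    print(f"{endpoint}: {slack:+.3f} ns")
```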