Large Language Models (LLMs) have demonstrated impressive code generation capabilities across a wide range of programming languages, including hardware description languages (HDLs) [1][2]. In this project, we will investigate LLM capabilities beyond pure code generation, focusing on how LLMs can understand and accomplish RTL design tasks on real chip designs. The thesis aims both to enhance the ability of open-source LLMs for this purpose and to characterize the strengths and weaknesses of LLM-generated RTL code.
This thesis is a collaboration between IIS and Chipmind AG, and a framework with the necessary datasets to get you started will be provided.
In this thesis, you will explore why open-source LLMs succeed or fail on specific RTL tasks and how various fine-tuning algorithms can improve their performance.