05-12-2024
Automated Security Code Reviews: Assessing the State of the Art
The goal of this project is to evaluate models for automating code reviews. The research is structured into three main stages. First, a manual review of a set of code reviews will be conducted to identify those related to security patches and to build a labeled dataset. Next, Large Language Models (LLMs) will be used to generate and evaluate reviews and corrections based on the labeled data. Finally, the labeled and generated data will be compared comprehensively to measure how current models perform relative to LLMs.
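As a rough illustration of stages two and three, the sketch below shows one way the generated reviews could be compared against the labeled ones. All names here (`LabeledReview`, `generate_review`, the lexical-similarity metric) are assumptions for illustration, not part of the project's actual pipeline.

```python
# Minimal sketch, assuming a dataset of (diff, human review, security flag) records.
from dataclasses import dataclass
from difflib import SequenceMatcher


@dataclass
class LabeledReview:
    diff: str             # code change under review
    human_review: str     # reviewer comment from the labeled dataset
    is_security: bool     # flagged as security-related during manual review


def generate_review(diff: str) -> str:
    """Placeholder for the LLM call that produces a review for a given diff."""
    raise NotImplementedError  # the real pipeline would prompt an LLM here


def similarity(generated: str, reference: str) -> float:
    """Crude lexical overlap between generated and human reviews (0 to 1)."""
    return SequenceMatcher(None, generated, reference).ratio()


def evaluate(dataset: list[LabeledReview]) -> float:
    """Average similarity over the security-related samples only."""
    security_samples = [s for s in dataset if s.is_security]
    scores = [similarity(generate_review(s.diff), s.human_review)
              for s in security_samples]
    return sum(scores) / len(scores) if scores else 0.0
```

In practice the comparison metric would likely be richer than lexical overlap (e.g., semantic similarity or human judgment), but the overall shape of the evaluation loop would be similar.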
Prerequisites:
IIC2233
Evaluation method: Grade 1-7, with 0/1 available vacancies
Mentor(s): Open on the platform