
Security Evaluations of GitHub's Copilot

Date

2023-08-11

Authors

Asare, Owura

Publisher

University of Waterloo

Abstract

Code generation tools driven by artificial intelligence have recently become more popular due to advancements in deep learning and natural language processing that have increased their capabilities. The proliferation of these tools may be a double-edged sword: while they can increase developer productivity by making it easier to write code, research has shown that they can also generate insecure code. In this thesis, we perform two evaluations of one such code generation tool, GitHub's Copilot, with the aim of obtaining a better understanding of its strengths and weaknesses with respect to code security. In our first evaluation, we use a dataset of vulnerabilities found in real-world projects to compare Copilot's security performance to that of human developers. In the set of 150 samples we consider, we find that Copilot performs better than human developers overall, but its performance still varies across certain types of vulnerabilities. In our second evaluation, we conduct a user study in which participants, with and without Copilot assistance, solve programming problems whose solutions are potentially vulnerable. The main goal of the user study is to determine how the use of Copilot affects participants' security performance. In our set of participants (n=21), we find that access to Copilot is associated with more secure solutions on the harder problems. For the easier problem, we observe no effect of Copilot access on the security of solutions. We also use the solutions obtained from the user study to perform a preliminary evaluation of the vulnerability detection capabilities of GPT-4. We observe mixed results, with high accuracy but also high false positive rates, and maintain that language models like GPT-4 remain promising avenues for accessible, static code analysis for vulnerability detection. We discuss Copilot's security performance in both evaluations with respect to different types of vulnerabilities, as well as its implications for the research, development, testing, and usage of code generation tools.
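
To illustrate the kind of LLM-based vulnerability detection the abstract describes, the following is a minimal sketch, not the thesis's actual protocol: it asks GPT-4 for a vulnerable/not-vulnerable judgment on a code snippet via the OpenAI Python client. The prompt wording, model name, example snippet, and response handling are illustrative assumptions.

```python
# Minimal sketch of prompting an LLM for static vulnerability detection.
# Assumes the `openai` Python package and an OPENAI_API_KEY in the environment.
# The prompt and snippet below are hypothetical, not taken from the thesis.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SNIPPET = '''
import sqlite3

def get_user(db: sqlite3.Connection, username: str):
    # String formatting inside a SQL query: a classic injection pattern.
    return db.execute(f"SELECT * FROM users WHERE name = '{username}'").fetchone()
'''

def detect_vulnerability(code: str) -> str:
    """Ask the model for a VULNERABLE / NOT VULNERABLE label plus a short reason."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a static analysis assistant. Answer 'VULNERABLE' or "
                    "'NOT VULNERABLE', then give a one-sentence justification."
                ),
            },
            {"role": "user", "content": f"Analyze this code:\n{code}"},
        ],
        temperature=0,  # keep the labeling as repeatable as possible
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(detect_vulnerability(SNIPPET))
```

In practice, such a classifier's output would be compared against ground-truth labels, which is where the high false positive rates mentioned in the abstract become visible.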

Keywords

copilot, security, code generation
