Whodunit: Classifying Code as Human Authored or GPT-4 Generated - A Case Study on CodeChef Problems

dc.contributor.author: Idialu, Oseremen Joy
dc.contributor.author: Mathews, Noble Saji
dc.contributor.author: Maipradit, Rungroj
dc.contributor.author: Atlee, Joanne M.
dc.contributor.author: Nagappan, Meiyappan
dc.date.accessioned: 2024-03-07T15:16:50Z
dc.date.available: 2024-03-07T15:16:50Z
dc.date.issued: 2024-04-15
dc.description.abstract: Artificial intelligence (AI) assistants such as GitHub Copilot and ChatGPT, built on large language models like GPT-4, are revolutionizing how programming tasks are performed, raising questions about whether code is authored by generative AI models. Such questions are of particular interest to educators, who worry that these tools enable a new form of academic dishonesty, in which students submit AI-generated code as their own work. Our research explores the viability of using code stylometry and machine learning to distinguish between GPT-4 generated and human-authored code. Our dataset comprises human-authored solutions from CodeChef and AI-authored solutions generated by GPT-4. Our classifier outperforms baselines, with an F1-score and AUC-ROC score of 0.91. A variant of our classifier that excludes gameable features (e.g., empty lines, whitespace) still performs well, with an F1-score and AUC-ROC score of 0.89. We also evaluated our classifier with respect to the difficulty of the programming problem and found almost no difference between easier and intermediate problems; the classifier performed only slightly worse on harder problems. Our study shows that code stylometry is a promising approach for distinguishing between GPT-4 generated and human-authored code.
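The abstract mentions layout-level "gameable" stylometric features such as empty lines and whitespace. As a minimal illustrative sketch of what extracting such features from a source string might look like, the function name and feature set below are hypothetical and are not the authors' implementation (their replication package is linked in this record):

```python
# Hypothetical sketch of layout-level ("gameable") stylometry features.
# Illustrative only -- not the classifier described in the paper.
def stylometry_features(source: str) -> dict:
    lines = source.splitlines()
    n_lines = max(len(lines), 1)   # avoid division by zero on empty input
    n_chars = max(len(source), 1)
    blank_lines = sum(1 for line in lines if not line.strip())
    ws_chars = sum(1 for ch in source if ch in " \t")
    return {
        # Fraction of lines that are empty -- a "gameable" layout feature.
        "empty_line_ratio": blank_lines / n_lines,
        # Fraction of characters that are spaces or tabs.
        "whitespace_ratio": ws_chars / n_chars,
        # Mean line length in characters (including empty lines).
        "avg_line_length": sum(len(line) for line in lines) / n_lines,
    }

# Example: a three-line snippet with one empty line.
feats = stylometry_features("def f(x):\n\n    return x + 1\n")
```

A classifier along the lines the abstract describes would feed vectors of such features (together with lexical and syntactic ones) into a machine-learning model; the variant that excludes gameable features would simply drop layout features like those above.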
dc.identifier.uri: https://2024.msrconf.org/
dc.identifier.uri: http://hdl.handle.net/10012/20384
dc.language.iso: en
dc.publisher: Mining Software Repositories
dc.relation.ispartofseries: 21st International Conference on Mining Software Repositories
dc.relation.uri: https://zenodo.org/records/10153319
dc.subject: code stylometry
dc.subject: ChatGPT
dc.subject: AI code
dc.subject: GPT-4 generated code
dc.subject: authorship profiling
dc.subject: software engineering
dc.title: Whodunit: Classifying Code as Human Authored or GPT-4 Generated - A Case Study on CodeChef Problems
dc.type: Conference Paper
dcterms.bibliographicCitation: Idialu, O.J., Mathews, N.S., Maipradit, R., Atlee, J.M., & Nagappan, M. (2024). Whodunit: Classifying Code as Human Authored or GPT-4 Generated - A Case Study on CodeChef Problems. 21st International Conference on Mining Software Repositories, April 15-16, 2024, Lisbon, Portugal.
uws.contributor.affiliation1: Faculty of Mathematics
uws.contributor.affiliation2: David R. Cheriton School of Computer Science
uws.peerReviewStatus: Reviewed
uws.scholarLevel: Faculty
uws.typeOfResource: Text

Files

Original bundle

Name: Detecting_AI_Generated_Code.pdf
Size: 1.27 MB
Format: Adobe Portable Document Format

License bundle

Name: license.txt
Size: 4.47 KB
Format: Item-specific license agreed upon to submission