
dc.contributor.author: Iman, Aliasghar
dc.date.accessioned: 2021-10-18 15:05:14 (GMT)
dc.date.available: 2021-10-18 15:05:14 (GMT)
dc.date.issued: 2021-10-18
dc.date.submitted: 2021-10-13
dc.identifier.uri: http://hdl.handle.net/10012/17640
dc.description.abstract: Reducing the gap between what practitioners want and what researchers assume they want is one of the vital challenges in software projects. Many people develop software tools, but only some of those tools end up being useful to developers. Because this problem is closely tied to short-term industry imperatives, industry has established several software development practices and methodologies that solicit frequent feedback from potential clients and adjust the project accordingly. We hypothesized that such agile-style techniques could be transplanted from industry to evaluate the usefulness of tools produced by software engineering and programming languages research. JTestParametrizer is an existing refactoring tool that automatically refactors method-scope renamed clones in test suites. This research applied practices and methodologies built on the above idea to evaluate, modify, and extend JTestParametrizer. First, we ran the tool on a suite of 18 benchmarks that we selected. By studying the resulting quantitative feedback, we detected and fixed several conceptual and non-conceptual bugs in the tool. Next, we developed questionnaires and used manual assessments and pull requests submitted to developers to solicit feedback on the quality of the tool's output; after studying this feedback, we modified the tool and added new configuration options to address it. Furthermore, we used a technique similar to industry's Minimum Viable Product technique to collect feedback on potential JTestParametrizer features before implementing them: we manually applied the effect of candidate features to the cases used for pull requests. Feedback on these manually modified pull requests revealed which factors practitioners care about most when refactoring unit tests, allowing us to formulate and support hypotheses about suitable features to implement in JTestParametrizer next.
dc.language.iso: en
dc.publisher: University of Waterloo
dc.title: A Quantitative and Qualitative Empirical Evaluation of a Test Refactoring Tool
dc.type: Master Thesis
dc.pending: false
uws-etd.degree.department: Electrical and Computer Engineering
uws-etd.degree.discipline: Electrical and Computer Engineering
uws-etd.degree.grantor: University of Waterloo
uws-etd.degree: Master of Applied Science
uws-etd.embargo.terms: 0
uws.contributor.advisor: Lam, Patrick
uws.contributor.affiliation1: Faculty of Engineering
uws.published.city: Waterloo
uws.published.country: Canada
uws.published.province: Ontario
uws.typeOfResource: Text
uws.peerReviewStatus: Unreviewed
uws.scholarLevel: Graduate
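The abstract refers to refactoring method-scope renamed clones in test suites. As a rough illustration only, the sketch below shows two hypothetical renamed test clones and a JUnit 5 parameterized test that unifies them; the Calculator class, test names, and values are assumptions for illustration and are not taken from the thesis, its benchmarks, or JTestParametrizer's actual output.

import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;
import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;

// Hypothetical system under test; not from the thesis or its benchmarks.
class Calculator {
    int add(int a, int b) {
        return a + b;
    }
}

class CalculatorTest {
    private final Calculator calc = new Calculator();

    // Before: two method-scope renamed clones -- the bodies differ only in literals.
    @Test
    void testAddPositive() {
        assertEquals(5, calc.add(2, 3));
    }

    @Test
    void testAddNegative() {
        assertEquals(-5, calc.add(-2, -3));
    }

    // After: the clones unified into a single JUnit 5 parameterized test.
    @ParameterizedTest
    @CsvSource({"2, 3, 5", "-2, -3, -5"})
    void testAdd(int a, int b, int expected) {
        assertEquals(expected, calc.add(a, b));
    }
}

Running a parameterized test like this requires the junit-jupiter-params artifact on the test classpath.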

