A Quantitative and Qualitative Empirical Evaluation of a Test Refactoring Tool
Reducing the gap between what practitioners want and what researchers assume they want is a vital challenge in software projects. Many research tools are developed, but only a few end up being useful to developers. Because industry faces this problem under short-term imperatives, it has established development practices and methodologies that gather frequent feedback from potential clients and adjust a project accordingly. We hypothesized that such agile-style techniques could be transplanted into software engineering and programming languages research to evaluate the usefulness of research tools. JTestParametrizer is an existing refactoring tool that automatically refactors method-scope renamed clones in test suites. This research applied practices and methodologies based on the aforementioned concept to evaluate, modify, and extend JTestParametrizer. First, we ran the tool on the 18 benchmarks in our benchmark suite; by studying the resulting quantitative feedback, we detected and fixed several conceptual and non-conceptual bugs in the tool. Next, we solicited feedback on the quality of the tool's refactorings through questionnaires, manual assessments, and pull requests submitted to developers; guided by this feedback, we modified the tool and added new configuration options. Furthermore, we used a technique similar to the industrial Minimum Viable Product technique to collect feedback on potential features for JTestParametrizer before actually implementing them: we manually applied the effect of each candidate feature to the cases used in our pull requests.
By studying feedback on these manually modified pull requests, we determined the factors practitioners care about most in the context of refactoring unit tests, allowing us to formulate and support hypotheses about which features to implement in JTestParametrizer next.
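To make the kind of transformation discussed above concrete, the sketch below shows two method-scope renamed clones, i.e. near-identical test methods differing only in their data, collapsed into a single parameterized test. This is a hypothetical, hand-written illustration of the general idea; the class and method names are invented and are not taken from JTestParametrizer's benchmarks or output.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical illustration of the refactoring JTestParametrizer automates.
// All names here (ListTests, testAddOne, ...) are invented for this sketch.
public class ListTests {

    // Before: two renamed clones that differ only in their literals.
    static void testAddOne() {
        List<Integer> list = new ArrayList<>();
        list.add(1);
        check(list.size() == 1);
    }

    static void testAddOneTwo() {
        List<Integer> list = new ArrayList<>();
        list.add(1);
        list.add(2);
        check(list.size() == 2);
    }

    // After: one parameterized test; the differing literals become parameters.
    static void testAdd(int[] values, int expectedSize) {
        List<Integer> list = new ArrayList<>();
        for (int v : values) {
            list.add(v);
        }
        check(list.size() == expectedSize);
    }

    // Minimal assertion helper so the sketch needs no testing framework.
    static void check(boolean condition) {
        if (!condition) {
            throw new AssertionError("test failed");
        }
    }

    public static void main(String[] args) {
        testAddOne();
        testAddOneTwo();
        testAdd(new int[] {1}, 1);
        testAdd(new int[] {1, 2}, 2);
        System.out.println("all tests pass");
    }
}
```

In a real JUnit test suite, the "after" version would typically be expressed with the framework's parameterized-test support rather than a plain method, but the essence is the same: the clones' shared body becomes one template and their differences become arguments.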
Cite this version of the work
Aliasghar Iman (2021). A Quantitative and Qualitative Empirical Evaluation of a Test Refactoring Tool. UWSpace. http://hdl.handle.net/10012/17640