
Towards Effectively Testing Sequence-to-Sequence Models from White-Box Perspectives


Date

2024-05-22

Authors

Shao, Hanying

Publisher

University of Waterloo

Abstract

The field of Natural Language Processing (NLP), which encompasses diverse tasks such as machine translation and question answering, has advanced considerably in recent years. Despite this progress, NLP systems, including those based on sequence-to-sequence models, still confront various challenges. To address these, metamorphic testing methods have been employed across different NLP tasks. These methods entail task-specific adjustments at the token or sentence level. For example, in machine translation this approach might involve replacing a single token in the source sentence to generate variants, whereas in question answering it might include altering or adding sentences within the question or context. By evaluating the system's responses to these alterations, potential deficiencies in the NLP systems can be identified. Determining the most effective modifications, especially in terms of which tokens or sentences contribute to system instability, is an essential and ongoing aspect of metamorphic testing research.

To tackle this challenge, we introduce two white-box methods for detecting sensitive tokens in the source text, alterations to which could potentially trigger errors in sequence-to-sequence models. The first method, GRI, leverages GRadient Information to identify these sensitive tokens, while the second, WALI, utilizes Word ALignment Information to pinpoint unstable tokens. We assess both approaches on a Transformer-based model for translation and question answering tasks, using the same datasets as benchmark methods.

When applied to machine translation testing to generate test cases, both GRI and WALI effectively improve the efficiency of black-box testing strategies at revealing translation bugs. Specifically, our approaches consistently outperform state-of-the-art automatic testing approaches in two respects: (1) under a given testing budget (i.e., number of executed test cases), both GRI and WALI reveal more bugs than the baseline approaches, and (2) given a predefined testing goal (i.e., number of detected bugs), our approaches require fewer testing resources (i.e., fewer test cases to execute). Additionally, we explore the application of GRI and WALI to test prioritization and evaluate their performance in QA software testing. The results show that GRI effectively prioritizes test cases that are highly likely to expose bugs and achieves a higher percentage of fault detection under the same execution budget. WALI, on the other hand, performs similarly to the baseline approaches, suggesting that while it does not enhance prioritization as much as GRI, it maintains a comparable level of effectiveness.
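
The sketch below illustrates the general gradient-based idea behind GRI, assuming a Hugging Face seq2seq model (t5-small as a stand-in), PyTorch, and a simple per-token gradient-norm score; the model name, the scoring rule, and the helper token_sensitivity are illustrative assumptions, not the thesis's actual implementation.

# Illustrative sketch only (not the thesis's implementation): rank source
# tokens by the gradient norm of the model loss with respect to their
# embeddings, in the spirit of the gradient-based (GRI-style) idea above.
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "t5-small"  # assumed stand-in for the Transformer under test
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
model.eval()

def token_sensitivity(source, reference):
    """Return source tokens paired with gradient-norm sensitivity scores."""
    enc = tokenizer(source, return_tensors="pt")
    labels = tokenizer(text_target=reference, return_tensors="pt").input_ids

    # Embed the source tokens explicitly so their gradients can be tracked.
    embeds = model.get_input_embeddings()(enc.input_ids).detach()
    embeds.requires_grad_(True)

    loss = model(inputs_embeds=embeds,
                 attention_mask=enc.attention_mask,
                 labels=labels).loss
    loss.backward()

    # A larger gradient norm suggests the output is more sensitive to
    # perturbations of that token, making it a candidate for mutation.
    scores = embeds.grad.norm(dim=-1).squeeze(0).tolist()
    tokens = tokenizer.convert_ids_to_tokens(enc.input_ids.squeeze(0).tolist())
    return sorted(zip(tokens, scores), key=lambda pair: -pair[1])

src = "translate English to German: The quick brown fox jumps over the lazy dog."
ref = "Der schnelle braune Fuchs springt über den faulen Hund."
for token, score in token_sensitivity(src, ref)[:5]:
    print(token, round(score, 4))

In a metamorphic testing pipeline, the highest-scoring tokens would be the first candidates to perturb when generating follow-up test cases.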

Keywords

Neural network, Software Testing, Neural machine translation, Neural machine translation model testing
