Title: An Empirical Study of Logging Practice in CUDA-based Deep Learning Systems
Author: Chen, An
Type: Master Thesis
Dates: 2025-05-15; 2025-05-14
Handle: https://hdl.handle.net/10012/21735
Language: English
Keywords: logging practices; deep learning systems; mining software repositories

Abstract: Although logging practices have been extensively explored in conventional software systems, there remains a lack of understanding of how logging is applied in CUDA-based deep learning (DL) systems, despite their growing adoption in practice. In this paper, we conduct an empirical study to examine the characteristics and rationales of logging practices in these systems. We analyze logging statements from 33 CUDA-based open-source DL projects, covering both general-purpose logging libraries and DL-specific logging frameworks. For each type, we identify the development or execution phases in which the logs are used, investigate the rationale behind their usage, and examine the relationship between the two types of logging. Our quantitative analysis reveals that the majority of logging statements occur during the model training phase, with significant usage also in the model loading and model evaluation/validation phases. We also observe that logging is predominantly used for monitoring purposes and for tracking model-related information. Furthermore, we find that a complementary relationship is the most prevalent between general-purpose and DL-specific logging. Our findings not only shed light on current logging practices in CUDA-based DL development but also provide practical guidance on when to use DL-specific versus general-purpose logging, helping practitioners make more informed decisions and guiding the evolution of DL-focused logging tools to better support developer needs.