Exploring the Fundamentals of Mutations in Deep Neural Networks
Abstract
The increasing popularity of deep neural networks (DNNs) has led to the adaptation of mutation analysis from classical software development to the machine learning (ML) paradigm. However, determining what to mutate and to what extent remains a challenge. Two questions are central to this aim: (i) which ML artifacts can be modified to generate acceptable mutants, and (ii) what extent of change in a specific metric qualifies a mutant as useful?
Addressing the first question, current research offers contradictory perspectives on which ML artifacts are suitable for mutation. In this paper, we argue that the ML development process resembles formal method-based development, drawing parallels between iterative refinement in ML and in formal specification-based development. This framing supports injecting bugs into the training program, the training data, and the trained model. Regarding the second question, existing ML mutant selection criteria focus on semantic aspects such as prediction accuracy and error rates, neglecting the magnitude of syntactic change. This oversight challenges the validity of foundational hypotheses of mutation analysis in ML, such as the competent programmer hypothesis and the coupling effect. Our observations motivate addressing these fundamental challenges to make mutation analysis in ML more realistic, and we outline plans to tackle them.
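To make the distinction between semantic and syntactic change concrete, the following is a minimal illustrative sketch, not taken from the paper and with all names hypothetical: a toy model's weights are perturbed with Gaussian noise (a model-level mutation), and the resulting mutant is scored both syntactically (L2 distance between the original and mutated weights) and semantically (prediction disagreement rate), the two measures whose decoupling the abstract highlights.

    # Hypothetical sketch of a model-level mutation operator. All names
    # (forward, mutate_weights) are illustrative, not the authors' tooling.
    import numpy as np

    rng = np.random.default_rng(0)

    def forward(weights, x):
        """Toy two-layer network: ReLU hidden layer, argmax over logits."""
        w1, w2 = weights
        h = np.maximum(x @ w1, 0.0)
        return np.argmax(h @ w2, axis=1)

    def mutate_weights(weights, sigma=0.1):
        """Model-level mutation: add Gaussian noise to every weight matrix."""
        return [w + rng.normal(0.0, sigma, size=w.shape) for w in weights]

    # Stand-in "trained" model and inputs (random for illustration only).
    weights = [rng.normal(size=(4, 8)), rng.normal(size=(8, 3))]
    x = rng.normal(size=(100, 4))

    mutant = mutate_weights(weights, sigma=0.1)

    # Syntactic change: how far the mutant's weights moved (L2 distance).
    syntactic = np.sqrt(sum(np.sum((a - b) ** 2) for a, b in zip(weights, mutant)))

    # Semantic change: fraction of inputs on which the prediction flips.
    semantic = np.mean(forward(weights, x) != forward(mutant, x))

    print(f"syntactic distance (L2): {syntactic:.3f}")
    print(f"semantic change (disagreement rate): {semantic:.3%}")

A selection criterion based only on the disagreement rate would treat two mutants with very different weight perturbations as equivalent, which is the kind of oversight the paper argues against.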
Keywords:
mutation analysis, deep learning, formal methods, software testing
Document Type:
Article in Conference Proceedings
Booktitle:
MODELS Companion '24: Proceedings of the ACM/IEEE 27th International Conference on Model Driven Engineering Languages and Systems
Pages:
227–233
Month:
10
Year:
2024
DOI:
https://doi.org/10.1145/3652620.3687426