Debugging Machine Learning Models
Abstract
Identifying the root cause of unexpected behaviour in a machine learning model or system is challenging. Current machine learning techniques give developers little insight into why a particular prediction is made. Hence, when a model produces unexpected behaviour, discovering the root cause of the failure is difficult, as it can stem from several sources, including bugs in the code, problems in the input data, and incorrect parameter settings. This opacity also increases the cost of performing regression testing or retraining a machine learning model once a discrepancy in its performance is identified.
This presentation describes an approach that seeks to advance the state of the art in debugging machine learning models and systems. Our approach builds on Pearl's theory of causation: it identifies the root cause of unexpected behaviour in a machine learning model by estimating the impact of each perceived cause on the model's behaviour.
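The abstract does not detail the estimation procedure, but the core idea of comparing a model's behaviour under interventions on suspected causes can be sketched as follows. This is an illustrative toy, not the authors' implementation: `train_and_score`, the configuration keys, and the scoring logic are all invented stand-ins for a real training pipeline.

```python
def train_and_score(data, lr):
    # Toy stand-in for a full training run: returns an accuracy-like score.
    # Mislabeled data points or an extreme learning rate lower the score.
    clean = sum(1 for x, y in data if y == (x > 0)) / len(data)
    lr_penalty = 0.3 if lr > 0.5 else 0.0
    return max(0.0, clean - lr_penalty)

def causal_effect(score_fn, baseline_cfg, intervention):
    """Estimate the effect of one suspected cause by intervening on it:
    score(baseline) minus score(baseline with the intervention applied)."""
    cfg = dict(baseline_cfg)
    cfg.update(intervention)
    return score_fn(**baseline_cfg) - score_fn(**cfg)

# Baseline configuration: clean labels and a sensible learning rate.
data = [(x, x > 0) for x in range(-5, 5)]
baseline = {"data": data, "lr": 0.1}

# Interventions representing suspected causes of a failure.
corrupted = [(x, not y) for x, y in data[:4]] + data[4:]  # label noise
effects = {
    "mislabeled_data": causal_effect(train_and_score, baseline,
                                     {"data": corrupted}),
    "bad_learning_rate": causal_effect(train_and_score, baseline,
                                       {"lr": 0.9}),
}
# The cause with the largest estimated effect is the prime suspect.
prime_suspect = max(effects, key=effects.get)
```

Under these toy assumptions, the label-noise intervention degrades the score more than the bad learning rate, so it is flagged as the likelier root cause; a real system would replace the scoring stub with the actual training and evaluation pipeline.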
Our approach advances the state of the art in debugging machine learning models and systems by supporting a wider class of causes (i.e., data, code, and parameter settings). It also reduces the manual effort developers expend reasoning about the possible root causes of a failure in a machine learning model, which is more often than not a tedious and time-consuming task.
Document Type:
Presentations
Howpublished:
Presented at the 9th User Conference on Advanced Automated Testing (UCAAT)
Month:
September
Year:
2022
2024 © Software Engineering For Distributed Systems Group