Calculation and Optimization of Thresholds for Sets of Software Metrics
Abstract
Software metrics are an established means of software assessment. With metrics, the often abstract product “software” can be described by assigning numbers to its attributes, e.g., size and complexity. For software quality assessment it is often not sufficient to consider a single attribute; instead, a holistic view on the software is required. To achieve this, sets of software metrics that cover the relevant attributes are used. For the interpretation of the metric values, thresholds are required. If a metric violates its threshold, the metric value is critical and indicates a potential problem in the measured software artifact. When a set of metrics is used for quality assessment, this means that a measured artifact is critical if at least one of the metrics in the set violates its threshold.
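The classification rule described above can be illustrated with a minimal sketch; the metric names and threshold values below are purely hypothetical and not taken from the talk:

```python
# Hypothetical thresholds: a metric value above its threshold is a violation.
THRESHOLDS = {
    "lines_of_code": 500,
    "cyclomatic_complexity": 15,
    "coupling_between_objects": 10,
}

def is_critical(measurements: dict) -> bool:
    """An artifact is critical if at least one metric violates its threshold."""
    return any(
        measurements[metric] > limit
        for metric, limit in THRESHOLDS.items()
    )

# Example artifact: it violates the complexity threshold, so it is critical.
artifact = {
    "lines_of_code": 320,
    "cyclomatic_complexity": 22,
    "coupling_between_objects": 4,
}
print(is_critical(artifact))  # True
```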
In this talk, we present a method for optimizing software metric sets with respect to their efficiency. This means that a smaller metric set is determined that replicates the classification into uncritical and critical software artifacts produced by the original metric set. The method is data-driven and based on machine learning. Furthermore, we will show that this method can be used not only to optimize the efficiency of metric sets, but also to reduce the complexity of a classifier, i.e., to replace an arbitrary classifier with one that is based on thresholds.
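The abstract does not detail the method itself; as an illustration of the underlying idea only, the following sketch greedily selects a smaller subset of metrics whose threshold rules reproduce the full set's classification on a sample of measured artifacts. All function names and the greedy strategy are assumptions for illustration, not the authors' algorithm:

```python
def classify(artifact, thresholds, metrics):
    """Critical if any of the chosen metrics violates its threshold."""
    return any(artifact[m] > thresholds[m] for m in metrics)

def agreement(artifacts, thresholds, metrics, labels):
    """Fraction of artifacts classified the same way as by the full set."""
    hits = sum(
        classify(a, thresholds, metrics) == label
        for a, label in zip(artifacts, labels)
    )
    return hits / len(artifacts)

def reduce_metric_set(artifacts, thresholds):
    """Greedy forward selection of a smaller, equivalent metric subset."""
    full_set = list(thresholds)
    labels = [classify(a, thresholds, full_set) for a in artifacts]
    selected, best = [], 0.0
    while len(selected) < len(full_set):
        candidate, score = None, best
        for m in full_set:
            if m in selected:
                continue
            s = agreement(artifacts, thresholds, selected + [m], labels)
            if s > score:
                candidate, score = m, s
        if candidate is None:      # no remaining metric improves the agreement
            break
        selected.append(candidate)
        best = score
        if best == 1.0:            # subset replicates the full classification
            break
    return selected
```

On a sufficiently representative sample, the returned subset classifies artifacts the same way as the full set while requiring fewer measurements, which corresponds to the notion of metric set efficiency discussed in the talk.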
Keywords:
Software Metrics, Thresholds, Machine Learning
Document Type:
Presentations
Organization:
Nanjing University
Address:
China
Month:
4
Year:
2011