Ranking warnings from multiple source code static analyzers via ensemble learning

Athos Ribeiro, Paulo Meirelles, Nelson Lago, Fabio Kon
Proceedings of the 15th International Symposium on Open Collaboration (OpenSym'2019)

While a wide variety of open source and proprietary source code static analyzers is available on the market, each of them usually performs better on a small set of problem classes, making it hard to choose a single tool to rely on when examining a program for bugs. Combining the analyses of different tools may reduce the number of false negatives, but yields a corresponding increase in the absolute number of false positives (which is already high for many tools). A possible solution, then, is to filter these results to identify the issues least likely to be false positives. In this study, we post-analyze the reports generated by three tools on synthetic test cases provided by the US National Institute of Standards and Technology. To make our technique as general as possible, we limit our data to the reports themselves, excluding other information such as change histories or code metrics. The features extracted from these reports are used to train a set of decision trees with AdaBoost, producing a stronger classifier that achieves 0.8 classification accuracy (the combined false positive rate of the tools used was 0.61). Finally, we use this classifier to rank static analyzer alarms by the probability that a given alarm is an actual bug in the source code.
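
The classification and ranking step summarized above can be illustrated with a short sketch. The example below is not the authors' implementation: it assumes scikit-learn (1.2 or later, where the AdaBoost base learner is passed via the estimator parameter) and a hypothetical CSV file of warnings with made-up feature columns, but it follows the same idea of boosting decision trees over report-derived features and ranking alarms by the predicted probability of being a true positive.

    # Illustrative sketch only; file name and feature columns are hypothetical.
    import pandas as pd
    from sklearn.ensemble import AdaBoostClassifier
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    # One row per warning, with categorical features taken from the tool
    # reports and a label indicating whether the warning corresponds to a
    # real bug (known from the synthetic test cases).
    data = pd.read_csv("warnings.csv")
    X = pd.get_dummies(data[["tool", "category", "severity"]])
    y = data["is_real_bug"]

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, random_state=42, stratify=y
    )

    # AdaBoost over shallow decision trees: many weak learners combined
    # into a stronger classifier.
    clf = AdaBoostClassifier(
        estimator=DecisionTreeClassifier(max_depth=2),
        n_estimators=200,
        random_state=42,
    )
    clf.fit(X_train, y_train)
    print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))

    # Rank all warnings by the predicted probability of being an actual bug.
    data["bug_probability"] = clf.predict_proba(X)[:, 1]
    ranked = data.sort_values("bug_probability", ascending=False)
    print(ranked.head())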


BibTeX:
@inproceedings{inproceedings7cc98d74,
  title     = {Ranking warnings from multiple source code static analyzers via ensemble learning},
  author    = {Ribeiro, Athos and Meirelles, Paulo and Lago, Nelson and Kon, Fabio},
  year      = {2019},
  doi       = {10.1145/3306446.3340828},
  publisher = {ACM},
  booktitle = {Proceedings of the 15th International Symposium on Open Collaboration (OpenSym 2019)}
}