Faculty of Mathematics, Physics
and Informatics
Comenius University Bratislava

Doctoral colloquium - Peter Anthony (26.2.2024)

Monday 26.2.2024 at 13:10, Lecture room I/9

21.02.2024, 15:27
By: Damas Gruska

Peter Anthony:
Tailoring Logic Explained Network for a Robust Explainable Malware Detection

The field of malware research faces persistent challenges in adopting machine learning solutions due to low generalization and a lack of explainability. While deep learning, particularly artificial neural networks, has shown promise in addressing the generalization problem, its inherent black-box nature makes it difficult to provide explicit explanations for predictions. Interpretable machine learning models, on the other hand, such as linear regression and decision trees, prioritize transparency but often sacrifice performance. In this work, to address the dual needs of robustness and explainability in cybersecurity, we apply a recently proposed interpretable-by-design neural network, the Logic Explained Network (LEN), to the complex landscape of malware detection. We investigate the effectiveness of LEN in discriminating malware and providing meaningful explanations, and we evaluate the quality of the explanations over increasing feature sizes using fidelity and other standard metrics. Additionally, we introduce an improvement to the simplification approach for the global explanation. Our analysis was carried out on static malware features provided by the EMBER dataset. The experimental results show that LEN's discriminating performance is competitive with black-box deep learning models. LEN's explanations demonstrated high fidelity, indicating that they genuinely reflect the model's inner workings. However, a notable trade-off between explanation fidelity and compactness is identified.