Metrics for evaluating interface explainability models for cyberattack detection in IoT data
In Complex Computational Ecosystems 2023 (CCE'23)
Abstract
The importance of machine learning (ML) in detecting cyberattacks lies in its ability to efficiently process and analyze the large volumes of IoT data that must be examined to ensure the security and privacy of sensitive information transmitted between connected devices. However, the lack of explainability of ML algorithms has become a significant concern in the cybersecurity community. Explainability techniques have therefore been developed to make ML algorithms more transparent, improving trust in attack detection systems by allowing cybersecurity analysts to understand the reasons behind model predictions and to identify any limitations or errors in the model. Key artifacts of explainability are interface explainability models such as impurity and permutation feature importance analysis, Local Interpretable Model-agnostic Explanations (LIME), and SHapley Additive exPlanations (SHAP). However, these models do not provide enough quantitative information (metrics) to build complete trust and confidence in the explanations they generate. In this paper, we propose and evaluate metrics such as reliability and latency to quantify the trustworthiness of these explanations and to establish confidence in the model's decisions when detecting and explaining cyberattacks in IoT data during the ML process.
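As a rough illustration of the kind of quantities such metrics capture, the sketch below times a single SHAP and a single LIME explanation (a latency proxy) and computes a simple top-k agreement score between impurity and permutation feature importance as a stand-in for reliability. The synthetic dataset, random-forest model, and metric definitions here are illustrative assumptions, not the experimental setup or metric formulations used in the paper.

    # Minimal sketch (assumed setup, not the paper's implementation):
    # explanation latency and a simple reliability proxy for common
    # interface explainability models.
    import time
    import numpy as np
    import shap
    from lime.lime_tabular import LimeTabularExplainer
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in for tabular IoT traffic features (benign vs. attack).
    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

    def timed(fn):
        # Return (result, wall-clock latency in seconds) for one explanation call.
        start = time.perf_counter()
        out = fn()
        return out, time.perf_counter() - start

    # Latency of a single SHAP explanation.
    shap_explainer = shap.TreeExplainer(model)
    _, shap_latency = timed(lambda: shap_explainer.shap_values(X_test[:1]))

    # Latency of a single LIME explanation.
    lime_explainer = LimeTabularExplainer(X_train, mode="classification")
    _, lime_latency = timed(
        lambda: lime_explainer.explain_instance(X_test[0], model.predict_proba, num_features=10)
    )

    # Reliability proxy (assumed definition): agreement between the top-k
    # features ranked by impurity importance and by permutation importance.
    k = 5
    imp_rank = np.argsort(model.feature_importances_)[::-1][:k]
    perm = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
    perm_rank = np.argsort(perm.importances_mean)[::-1][:k]
    reliability = len(set(imp_rank) & set(perm_rank)) / k

    print(f"SHAP latency: {shap_latency:.3f}s  LIME latency: {lime_latency:.3f}s")
    print(f"Top-{k} feature agreement (reliability proxy): {reliability:.2f}")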