Adversarial Machine Learning (ML) aims to fool trained ML models with maliciously crafted inputs in order to test their robustness. Adversarial examples (AEs) generated against relatively simple ML models for a classification task often transfer to other, more complex models. Threat models are distinguished by the attacker's level of knowledge: white-box, black-box, and combinations of the two. Defenses against AEs, such as adversarial training and defensive distillation, have been researched extensively; however, none of these defenses fully protects ML models against AEs.
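For illustration, the idea behind one common white-box AE generation method, the Fast Gradient Sign Method (FGSM), can be sketched against a toy logistic-regression classifier; the model weights and inputs below are made up purely for demonstration:

```python
import numpy as np

def fgsm(x, w, b, y, eps):
    """Fast Gradient Sign Method for a binary logistic-regression model.

    For the cross-entropy loss, the gradient w.r.t. the input x is
    (sigmoid(w.x + b) - y) * w; FGSM steps by eps in the sign direction.
    """
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))  # model's predicted probability
    grad = (p - y) * w                       # gradient of the loss w.r.t. x
    return x + eps * np.sign(grad)           # adversarially perturbed input

# Toy model: predicts class 1 whenever the feature sum is positive.
w = np.array([1.0, 1.0])
b = 0.0
x = np.array([0.3, 0.2])        # benign input, correctly classified as 1
x_adv = fgsm(x, w, b, y=1.0, eps=0.6)
# The perturbed input crosses the decision boundary and is classified as 0.
```

The same gradient-sign idea carries over to deep models, where the gradient is obtained via backpropagation instead of a closed-form expression.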
ML-based intrusion detection systems (IDSs) are increasingly deployed in industrial networks to counter ever-evolving cyber threats that intend to harm industrial control systems (ICS). IDSs for ICS can be classified by detection technique and by the ICS characteristics they exploit: protocol analysis-based, traffic mining-based, and control process analysis-based.
ML-based intrusion detection becomes more robust when adversarial ML is used to uncover the weaknesses of its models. At Fraunhofer IOSB, within the research group Securely Networked Systems of the department Information Management and Production Control (ILT), we aim to build a platform for generating AEs against network IDSs.
- Explore methods to generate AEs within the industrial network security domain
- Develop a collection of AEs for different IDS types and evaluate their effectiveness
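Unlike image perturbations, AEs against a network IDS must respect protocol and feature-range constraints to remain valid traffic. A minimal sketch of such a constraint-aware perturbation; the feature layout, ranges, and gradient values here are hypothetical:

```python
import numpy as np

def constrained_perturb(x, grad, eps, mutable, lo, hi):
    """Sign-based perturbation restricted to attacker-controllable features,
    then clipped back into valid feature ranges."""
    return np.clip(x + eps * np.sign(grad) * mutable, lo, hi)

# Illustrative flow features: [duration (s), bytes sent, protocol id]
x = np.array([1.2, 480.0, 6.0])

# Hypothetical loss gradient w.r.t. the input (would come from the IDS model)
grad = np.array([0.8, -0.5, 0.3])

# The attacker can stretch timing and add padding, but cannot rewrite
# the protocol id without breaking the connection.
mutable = np.array([1.0, 1.0, 0.0])
lo = np.array([0.0, 0.0, 6.0])      # per-feature lower bounds
hi = np.array([60.0, 1500.0, 6.0])  # per-feature upper bounds

x_adv = constrained_perturb(x, grad, eps=0.5, mutable=mutable, lo=lo, hi=hi)
```

Encoding which features an attacker can actually manipulate, and within which ranges, is one of the design questions such a platform would need to answer per IDS type.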