1. School of Management, Xi'an University of Architecture & Technology, Xi'an 710055, China 2. School of Computer Science, Xi'an Polytechnic University, Xi'an 710048, China 3. Department of Electronic and Electrical Engineering, Brunel University London, Uxbridge UB8 3PH, U.K.
Task assignment in a distributed intrusion detection system (DIDS) deployed in an edge computing environment, where node performance is limited, is a typical resource-constrained task scheduling problem. To solve this problem, a low-load task scheduling scheme for DIDS based on deep reinforcement learning is proposed. After an evaluation model of detection-engine performance and packet load is established, the task scheduling process is described as a Markov decision process, and the model's state and action spaces and value function are defined in order to find the optimal strategy for keeping DIDS in a low-load state. To cope with the excessively large, high-dimensional continuous action space, a deep recurrent neural network is used to perform function fitting. Finally, because an excessively low load may increase the packet loss rate, a method for balancing the two contradictory indicators, low load and packet loss rate, is proposed and a corresponding problem model is established. Experimental results show that the proposed scheme enables DIDS to dynamically adjust its scheduling strategy as the network changes, keeping the overall system load low without significantly degrading security indicators.
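As a rough illustration of the scheduling formulation summarized above, the sketch below frames DIDS task assignment as a Markov decision process: the state is the load vector of the detection engines, an action assigns an incoming packet batch to one engine, and the reward jointly penalizes high overall load and packet loss. All names, capacities, and reward weights here are illustrative assumptions for exposition, not the paper's actual model; a greedy least-loaded baseline stands in for the learned deep-reinforcement-learning policy.

```python
import random

NUM_ENGINES = 3   # assumed number of detection engines
CAPACITY = 10     # assumed max batches an engine can queue before dropping

def step(loads, action, batch=1):
    """One MDP transition: assign `batch` units to engine `action`.

    Returns the new load vector and a reward that decreases with total
    load and is heavily penalized when packets are dropped (illustrative
    weights, not from the paper).
    """
    loads = list(loads)
    dropped = 0
    if loads[action] + batch > CAPACITY:
        dropped = loads[action] + batch - CAPACITY
        loads[action] = CAPACITY
    else:
        loads[action] += batch
    reward = -sum(loads) / (NUM_ENGINES * CAPACITY) - 5.0 * dropped
    return tuple(loads), reward

def greedy_policy(loads):
    """Baseline stand-in for the learned policy: pick the least-loaded engine."""
    return min(range(NUM_ENGINES), key=lambda i: loads[i])

loads = (0, 0, 0)
total_reward = 0.0
for _ in range(12):
    a = greedy_policy(loads)
    loads, r = step(loads, a)
    total_reward += r
print(loads, round(total_reward, 2))  # batches spread evenly, no drops
```

In the paper's scheme, the greedy baseline above would be replaced by a deep recurrent neural network that approximates the value function over this state space, which is what makes the large, high-dimensional continuous action space tractable.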