Defeating adversarial attacks with MTD and genetic algorithm

Date

2022-04

Authors

Romero, Laila
Rubio-Medrano, Carlos

Abstract

Neural Networks (NNs) have become an integral part of machine learning, specifically in the areas of pattern recognition, decision making, and image detection. Due to their increasing importance and usage, NNs have become targets of adversarial attacks. The most common and successful attacks are gradient-based attacks, such as data poisoning and backdoor attacks on image detection neural networks. Existing defense strategies rely on adversarial training, which has proven to still be susceptible to attacks. NNs run many state-of-the-art image classification systems; attacks against them could therefore be dangerous. This project aims to identify and create a defense strategy for image detection NNs against white-box data poisoning adversarial attacks, using aspects of Moving Target Defense (MTD) and Genetic Algorithms, without incurring significant cost to the accuracy and speed of the model. The idea of the project is to use a trained NN as a template for other NNs. The number of NNs in the pool will be determined through experimentation. Inputs will be fed into a pool of NNs, and the average of the outputs will be used. After each input, the portion of the pool with the highest accuracy will be selected for mutation and reproduction of new NNs, a strategy based on Genetic Algorithms. The new NNs will replace the selected NNs, and the remaining NNs will be fresh NNs derived from the trained model. Throughout this process, a quarter of the NN pool will be randomly selected to have a higher weight, affecting the final output of the system. This defense system will be tested against four of the most effective gradient-based attacks: the Fast Gradient Sign Method, the Basic Iterative Method, Projected Gradient Descent, and the Carlini-Wagner attack. This is an ongoing project, and the results will inform the number of NNs in the pool.
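The pool-based defense described above can be sketched in a few lines of Python. This is a minimal illustration, not the project's implementation: `ToyNet` is a hypothetical stand-in for a trained image-detection NN, and the fitness criterion, mutation rate, and pool size are assumptions for demonstration. It shows the moving parts named in the abstract: a template network cloned into a pool, averaged (weighted) outputs, a randomly up-weighted quarter of the pool, and GA-style selection and mutation after each input.

```python
import random

class ToyNet:
    """Hypothetical stand-in for a trained NN: a linear scorer over input features."""
    def __init__(self, weights):
        self.weights = list(weights)

    def predict(self, x):
        # Single scalar score for simplicity; a real NN would emit class logits.
        return sum(w * xi for w, xi in zip(self.weights, x))

def mutate(net, rate=0.1, rng=random):
    """GA-style mutation: perturb each weight with small Gaussian noise."""
    return ToyNet([w + rng.gauss(0.0, rate) for w in net.weights])

class MTDPool:
    """Moving-target-defense pool built from a trained template network.

    After each query, the best-scoring fraction of the pool is kept and
    mutated; the rest are replaced by fresh mutants of the template. A
    random quarter of the pool is given a higher weight when averaging,
    so the system an attacker probes keeps changing between inputs.
    """
    def __init__(self, template, size=8, keep_frac=0.25, seed=0):
        self.rng = random.Random(seed)
        self.template = template
        self.size = size
        self.keep = max(1, int(size * keep_frac))
        self.pool = [mutate(template, rng=self.rng) for _ in range(size)]

    def query(self, x):
        # Randomly up-weight a quarter of the pool (the moving-target element).
        boosted = set(self.rng.sample(range(self.size), max(1, self.size // 4)))
        weights = [2.0 if i in boosted else 1.0 for i in range(self.size)]
        outs = [net.predict(x) for net in self.pool]
        avg = sum(w * o for w, o in zip(weights, outs)) / sum(weights)
        self._evolve(outs, avg)
        return avg

    def _evolve(self, outs, avg):
        # Fitness proxy (assumption): closeness of each net's output to the
        # ensemble average; the project would rank by accuracy on the input.
        ranked = sorted(range(self.size), key=lambda i: abs(outs[i] - avg))
        survivors = ranked[:self.keep]
        new_pool = [mutate(self.pool[i], rng=self.rng) for i in survivors]
        while len(new_pool) < self.size:
            new_pool.append(mutate(self.template, rng=self.rng))
        self.pool = new_pool
```

For example, `MTDPool(ToyNet([1.0, -2.0, 0.5]), size=8).query([1.0, 0.0, 1.0])` returns the weighted ensemble score while silently replacing part of the pool, so two identical queries are answered by different network populations.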

Keywords

machine learning, artificial intelligence, cybersecurity

Rights

Attribution-NoDerivatives 4.0 International