Theses
Permanent URI for this collectionhttps://hdl.handle.net/1969.6/1140
SECURENN: Defeating adversarial neural network attacks with moving target defense and genetic algorithms (2023-05)
Romero, Laila Maria; Rubio-Medrano, Carlos; Wang, Wenlu; King, Scott

Neural Networks (NNs) have become a critical part of Artificial Intelligence due to their reputation for producing highly accurate outputs with minimal human assistance. NNs power diverse applications, from housing-market predictors to medical imaging. Their swift rise in importance and incorporation into everyday life have made them valuable targets for Adversarial Attacks: malicious actions aimed at undermining NN model performance, causing misbehavior, or acquiring protected information. NNs run many state-of-the-art image classification systems; attacks on them can therefore endanger the property, health, and safety of their users. The most common and successful attacks are gradient-based attacks on image classification NNs. Existing defense strategies, such as Adversarial Training, fall short against more complex attacks because they tend to degrade the generalization ability of the models they protect. This work proposes SecureNN, a defense framework for image classification NNs that increases overall model robustness against white-box untargeted Adversarial Attacks. By combining the well-established cybersecurity and Machine Learning techniques of Moving Target Defense, Genetic Algorithms, and Ensemble Learning, SecureNN reduces the loss of generalization ability seen in most defense methods and minimizes the advantages held by white-box attacks, without incurring significant cost in model accuracy or speed. SecureNN has been tested extensively on four NN architecture types (CNN, ResNet50, Inception, and Inception-ResNet), each trained on three common datasets: MNIST, ImageNet, and CIFAR-10.
Each model architecture and dataset were tested against the four highest-error-rate gradient-based attacks: the Fast Gradient Sign Method, the Basic Iterative Method, Projected Gradient Descent, and the Carlini-Wagner attack. In our experiments, SecureNN achieved accuracy on average 1.5% higher than Adversarial Training and 49.6% higher than undefended models, substantiating its potential as an effective defense mechanism for increasing NN robustness.
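For readers unfamiliar with the attacks named above, the simplest is the Fast Gradient Sign Method (FGSM): it perturbs an input by a small step of size epsilon in the sign direction of the loss gradient with respect to that input. The sketch below is not SecureNN's code; it is a minimal illustration of FGSM against a toy logistic-regression "model" (weights `w`, bias `b`, and the `fgsm_perturb` helper are hypothetical names for this example), using NumPy and a hand-derived gradient in place of a real NN and autodiff:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    """Fast Gradient Sign Method against a logistic-regression model.

    For binary cross-entropy loss, the gradient of the loss with respect
    to the input x is (sigmoid(w.x + b) - y) * w.  FGSM steps x by eps
    in the sign direction of that gradient, so the perturbation of each
    feature is bounded by eps (an L-infinity constraint).
    """
    grad = (sigmoid(np.dot(w, x) + b) - y) * w
    return x + eps * np.sign(grad)

# Toy example: the clean input is classified correctly as class 1 ...
w, b = np.array([2.0, -1.0]), 0.0
x, y = np.array([0.2, -0.1]), 1.0
print(sigmoid(np.dot(w, x) + b) > 0.5)   # clean prediction is class 1

# ... but a small eps=0.4 perturbation flips the prediction to class 0.
x_adv = fgsm_perturb(x, y, w, b, eps=0.4)
print(sigmoid(np.dot(w, x_adv) + b) > 0.5)
```

White-box attacks like this one assume full access to the model's weights and gradients; SecureNN's Moving Target Defense invalidates that assumption by varying which ensemble member answers a given query.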