Advisor: Rubio-Medrano, Carlos
Author: Romero, Laila Maria
Date accessioned: 2023-08-07
Date available: 2023-08-07
Date issued: 2023-05
URI: https://hdl.handle.net/1969.6/96901

Description: A Thesis Submitted in Partial Fulfillment of the Requirements for the Degree of MASTER OF SCIENCE in Computer Science from Texas A&M University-Corpus Christi.

Abstract: Neural Networks (NNs) have become a critical part of Artificial Intelligence due to their reputation for producing highly accurate outputs with minimal human assistance. NNs are used in diverse applications, from housing-market predictors to medical imaging. Their swift rise in importance and incorporation into our lives have made them valuable targets for Adversarial Attacks: malicious actions aimed at undermining NN model performance, causing misbehavior, and acquiring protected information. Because NNs power many state-of-the-art image classification systems, such attacks can endanger the property, health, and safety of their users. The most common and successful attacks are gradient-based attacks on image classification NNs. Existing defense strategies, such as Adversarial Training, fall short in protecting models against more complex attacks because they tend to degrade the models' generalization ability. This work proposes SecureNN, a defense framework for image classification NNs that increases the overall robustness of models against white-box, untargeted Adversarial Attacks. By combining the well-established cybersecurity and Machine Learning techniques of Moving Target Defense, Genetic Algorithms, and Ensemble Learning, SecureNN reduces the degradation in generalization ability seen with most defense methods and minimizes the advantages that white-box attacks enjoy, without incurring significant cost in the accuracy or speed of the model. SecureNN has been tested extensively on four NN architecture types (CNN, ResNet50, Inception, and Inception-ResNet), trained on three common datasets: MNIST, ImageNet, and CIFAR-10. Each model architecture and dataset were tested against the four gradient-based attacks with the highest error rates: Fast Gradient Sign Method, Basic Iterative Method, Projected Gradient Descent, and Carlini-Wagner. In the experimental phase, SecureNN achieved on average 1.5% higher accuracy than Adversarial Training and 49.6% higher accuracy than undefended models, substantiating its potential as a defense mechanism effective in increasing NN robustness.

Extent: 94 pages
Language: en-US
Rights: This material is made available for use in research, teaching, and private study, pursuant to U.S. Copyright law. The user assumes full responsibility for any use of the materials, including but not limited to, infringement of copyright and publication rights of reproduced materials. Any materials used should be fully credited with its source. All rights are reserved and retained regardless of current or future development or laws that may apply to fair use standards. Permission for publication of this material, in part or in full, must be secured with the author and/or publisher.
Keywords: adversarial attacks; artificial intelligence; cybersecurity; defense; machine learning; neural networks
Title: SECURENN: Defeating adversarial neural network attacks with moving target defense and genetic algorithms
Type: Text
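
The abstract names the Fast Gradient Sign Method (FGSM) among the gradient-based attacks evaluated. As background on how such a white-box, untargeted attack perturbs an input, the following is a minimal PyTorch sketch of the standard FGSM step, x_adv = x + epsilon * sign(grad_x L(x, y)). It is a generic illustration only; `model`, `images`, `labels`, and `epsilon` are placeholder names, not identifiers from the thesis.

    # Minimal FGSM sketch (illustrative only; not the thesis's code).
    # Assumes a trained classifier `model`, an input batch `images` in [0, 1]
    # with integer class `labels`, and a perturbation budget `epsilon`.
    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, images, labels, epsilon):
        """Untargeted FGSM: x_adv = x + epsilon * sign(grad_x L(x, y))."""
        images = images.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(images), labels)
        loss.backward()
        # Step in the direction that increases the loss, then clip to the valid image range.
        adv = images + epsilon * images.grad.sign()
        return adv.clamp(0.0, 1.0).detach()

The Basic Iterative Method and Projected Gradient Descent iterate this same step with a projection back onto the epsilon-ball around the original input, which is why they are typically stronger than the single-step FGSM.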
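Moving Target Defense combined with Ensemble Learning generally means that the model answering a given query is drawn at random from a pool, so a white-box attacker cannot pin its gradients to a single fixed target. The sketch below illustrates only that general idea; it is not SecureNN's actual design, and the abstract does not specify how the pool is built or how the Genetic Algorithm selects or evolves its members.

    # Generic moving-target ensemble sketch (illustrative; not SecureNN's design).
    # Assumes a pool of trained classifiers `models`; the random per-query
    # selection is what denies a white-box attacker a fixed gradient target.
    import random
    import torch

    class MovingTargetEnsemble:
        def __init__(self, models, k=3):
            self.models = models  # pool of trained models (placeholder)
            self.k = k            # number of models sampled per query

        @torch.no_grad()
        def predict(self, x):
            # Sample a random subset of the pool for this query and
            # average their softmax outputs before taking the argmax.
            chosen = random.sample(self.models, min(self.k, len(self.models)))
            probs = torch.stack([m(x).softmax(dim=-1) for m in chosen])
            return probs.mean(dim=0).argmax(dim=-1)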