TY - JOUR
AU - Tuna, Omer Faruk
AU - Catak, Ferhat Ozgur
AU - Eskil, M. Taner
AB - Deep neural network (DNN) models are widely renowned for their resistance to random perturbations. However, researchers have found that these models are extremely vulnerable to deliberately crafted, seemingly imperceptible perturbations of the input, referred to as adversarial examples. Adversarial attacks can substantially compromise the security of DNN-powered systems and pose high risks, especially in areas where security is a top priority. Numerous studies have been conducted in recent years to defend against these attacks and to develop more robust architectures resistant to adversarial threats. In this study, we propose a new architecture and enhance a recently proposed technique by which we can restore adversarial samples to their original class manifold. We leverage several uncertainty metrics obtained from Monte Carlo dropout (MC Dropout) estimates of the model together with the model’s own loss function, and combine them with the defensive distillation technique to defend against these attacks. We experimentally evaluated and verified the efficacy of our approach on the MNIST (Digit), MNIST (Fashion), and CIFAR10 datasets. Our experiments show that the proposed method reduces the attack success rate to below 5% without compromising clean accuracy.
TI - TENET: a new hybrid network architecture for adversarial defense
JF - International Journal of Information Security
DO - 10.1007/s10207-023-00675-1
DA - 2023-08-01
UR - https://www.deepdyve.com/lp/springer-journals/tenet-a-new-hybrid-network-architecture-for-adversarial-defense-i4wJVFo0bn
SP - 987
EP - 1004
VL - 22
IS - 4
DP - DeepDyve
ER -