Research on adversarial attack and robustness of deep neural networks

Abstract: Despite the great success of deep neural networks, adversarial attacks can fool some well-trained classifiers with small perturbations. We propose a specific type of adversarial attack that can fool classifiers with significant changes. Statistically, existing adversarial attacks increase the Type II error, while the proposed one targets the Type I error, which are hence…
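For context, the "small perturbations" the abstract refers to are typically produced by gradient-based attacks such as FGSM. The following is a minimal illustrative sketch of that standard attack, not the seminar's proposed method; the `SimpleNet` model and the random inputs are placeholder assumptions.

```python
# Illustrative FGSM sketch (standard small-perturbation attack),
# assuming a placeholder classifier and random data.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleNet(nn.Module):
    """Tiny stand-in classifier; any trained model could be used instead."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.fc = nn.Linear(28 * 28, num_classes)

    def forward(self, x):
        return self.fc(x.flatten(start_dim=1))

def fgsm_attack(model, x, y, eps=0.1):
    """Return x perturbed by a small sign-of-gradient step that increases the loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Small perturbation: step of size eps in the direction that raises the loss.
    return (x_adv + eps * x_adv.grad.sign()).detach()

if __name__ == "__main__":
    model = SimpleNet()
    x = torch.rand(4, 1, 28, 28)       # placeholder inputs
    y = torch.randint(0, 10, (4,))     # placeholder labels
    x_adv = fgsm_attack(model, x, y, eps=0.1)
    print("max pixel change:", (x_adv - x).abs().max().item())
```

In the abstract's terminology, an attack like this perturbs a correctly classified input until it is misclassified, i.e. it pushes the classifier toward missing the true class (a Type II error); the proposed attack instead concerns large changes and Type I error.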
