Daniel
May 11, 2021

--

Evolution Strategies can be used to craft adversarial examples that fool image classifiers in a black-box setting, and a recently published paper compares three variants. In contrast to a white-box setting, where the adversary has full access to the neural network and knows its architecture and parameters, the black-box setting is more realistic: the adversary can only query the model and observe its outputs. The two population-less variants, (1+1)-ES and Natural Evolution Strategies, and the population-based variant CMA-ES are tested against three neural network architectures, VGG-16, Inception-v3, and ResNet-50, all trained on ImageNet. CMA-ES turned out to dominate the other variants: it achieves higher success rates and requires fewer queries to compute the adversarial examples. The authors attribute this to its use of a population, which may allow better exploration of the fitness landscape.
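To make the setting concrete, here is a minimal sketch of the simplest variant, a (1+1)-ES attack loop. Everything in it is illustrative rather than taken from the paper: the `classify` function stands in for the black-box model (it is assumed to return class probabilities and is the adversary's only access to the network), and `sigma`, `epsilon`, and `max_queries` are hypothetical parameters.

```python
import numpy as np

def one_plus_one_es_attack(x, true_label, classify, sigma=0.01,
                           epsilon=8 / 255, max_queries=10_000):
    """Sketch of a (1+1)-ES black-box attack (illustrative, not the paper's code).

    x           : original image, float array with values in [0, 1]
    true_label  : index of the correct class
    classify    : black-box function mapping an image to class probabilities
    sigma       : mutation step size (hypothetical value)
    epsilon     : L-infinity bound on the perturbation (hypothetical value)
    """
    # Fitness: probability assigned to the true class. Lower is better,
    # since the goal is to make the classifier drop the true label.
    delta = np.zeros_like(x)                      # current perturbation
    best_fitness = classify(x)[true_label]

    for query in range(max_queries):
        # Mutate: add Gaussian noise, then project back into the epsilon-ball.
        candidate = np.clip(delta + sigma * np.random.randn(*x.shape),
                            -epsilon, epsilon)
        adv = np.clip(x + candidate, 0.0, 1.0)
        probs = classify(adv)                     # one black-box query

        if probs.argmax() != true_label:          # misclassified: success
            return adv, query + 1

        # (1+1) selection: keep the offspring only if it improves fitness.
        if probs[true_label] < best_fitness:
            delta, best_fitness = candidate, probs[true_label]

    return None, max_queries                      # attack failed within budget
```

The (1+1) selection rule keeps only the better of parent and offspring at each step; CMA-ES replaces this with a full population and an adapted covariance matrix, which is exactly the property the authors credit for its stronger exploration of the fitness landscape.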

--
