Daniel
May 11, 2021

--

A recently published paper shows that Evolution Strategies can be used to create adversarial examples that fool image classifiers in a black-box setting, and compares three variants. Unlike a white-box setting, where the adversary has full access to the neural network and knows its architecture and parameters, a black-box setting is more realistic: the attacker only observes the model's outputs. The two population-less variants, (1+1)-ES and Natural Evolution Strategies, and the population-based variant CMA-ES are tested against three neural network architectures trained on ImageNet: VGG-16, Inception-v3, and ResNet-50. It turned out that CMA-ES dominates the other variants. It achieves higher success rates and requires…
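To make the idea concrete, here is a minimal sketch of the simplest variant, a (1+1)-ES attack: it only queries the classifier's score for the true class and keeps a mutated perturbation whenever the score drops. The `score_fn`, the toy linear "classifier", and all parameter values are assumptions for illustration, not the paper's setup.

```python
import numpy as np

def one_plus_one_es_attack(score_fn, x, steps=200, sigma=0.1, eps=0.05, seed=0):
    """Minimal (1+1)-ES sketch: minimize the black-box true-class score
    by mutating an L-infinity-bounded perturbation (hypothetical setup)."""
    rng = np.random.default_rng(seed)
    delta = np.zeros_like(x)
    best = score_fn(x + delta)
    for _ in range(steps):
        # Mutate the current perturbation with Gaussian noise,
        # then clip so the perturbation stays inside the eps ball.
        cand = np.clip(delta + sigma * rng.standard_normal(x.shape), -eps, eps)
        s = score_fn(x + cand)
        if s < best:  # keep the single child only if it lowers the score
            delta, best = cand, s
    return x + delta, best

# Toy stand-in for a black-box classifier: the "true-class score"
# of a fixed linear model (an assumption, not a real network).
w = np.linspace(-1.0, 1.0, 16)
score = lambda img: float(w @ img)

x0 = np.full(16, 0.5)
adv, final = one_plus_one_es_attack(score, x0)
```

Because the parent is only replaced when the child scores lower, the attack never worsens the objective, and the clipping step guarantees the adversarial image stays within the eps ball around the original.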

--
