A recently published paper shows how Evolution Strategies can be used to craft adversarial examples that fool image classifiers in a black-box setting, and compares three variants of the approach. In contrast to a white-box setting, where the adversary has full access to the neural network and knows its architecture and parameters, a black-box…
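As a rough illustration of the idea (not the paper's specific algorithm), the sketch below runs a simple (1, λ) evolution strategy against a toy stand-in classifier that can only be queried for a score, never for gradients. The attack samples Gaussian perturbations inside an L-infinity ball around the clean input and keeps the offspring that pushes the score furthest toward the decision boundary; `classifier_score`, the population size, and all thresholds are hypothetical choices for the demo.

```python
import random

def classifier_score(x):
    # Toy black-box "classifier": score > 0 means class "A", else class "B".
    # (Hypothetical stand-in; the paper attacks real image classifiers.)
    return sum(x) - 1.0

def es_attack(x, sigma=0.1, pop=20, iters=200, eps=0.5, seed=0):
    """(1, lambda) evolution strategy: perturb x within an L-inf ball of
    radius eps until the queried score flips sign (misclassification)."""
    rng = random.Random(seed)
    best = list(x)
    for _ in range(iters):
        offspring = []
        for _ in range(pop):
            # Gaussian mutation, clamped to stay within eps of the clean input
            cand = [min(x[i] + eps, max(x[i] - eps, best[i] + rng.gauss(0, sigma)))
                    for i in range(len(x))]
            offspring.append((classifier_score(cand), cand))
        # comma selection: keep the offspring closest to flipping the prediction
        score, best = min(offspring, key=lambda t: t[0])
        if score <= 0:  # prediction flipped: adversarial example found
            return best
    return None

clean = [0.4, 0.4, 0.4]   # classifier_score(clean) = 0.2, i.e. class "A"
adv = es_attack(clean)    # a nearby input the black-box labels as class "B"
```

The only information the attacker uses is the returned score, which is what makes the method applicable in the black-box setting the blurb describes.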

--

The authors of a recently published research paper show how to inject a backdoor into a machine learning model that is inactive, and cannot be detected, in the original uncompressed model, but becomes active only when the model runs in compressed form, e.g. when deployed on a mobile phone…
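A minimal toy sketch of why this is possible (again, not the paper's actual construction): a weight can sit just below a rounding boundary, so the full-precision model behaves benignly on a trigger input, while post-training quantization rounds the weight up and pushes that same input across the decision threshold. The one-weight "model", the quantization step, and the trigger value are all hypothetical.

```python
def quantize(w, step=1.0):
    # post-training rounding, as when a model is compressed for deployment
    return round(w / step) * step

def predict(w, b, x):
    return "malicious" if w * x + b > 0 else "benign"

# Weight chosen just below the rounding boundary: harmless at full
# precision, but rounding to 1.0 flips the trigger's classification.
w, b = 0.7, -0.9
trigger = 1.0

full = predict(w, b, trigger)                  # uncompressed model: "benign"
compressed = predict(quantize(w), b, trigger)  # compressed model: "malicious"
```

On ordinary inputs far from the boundary (e.g. `x = 0.0` or `x = 2.0`) the two models agree, which is what lets the backdoor hide in the uncompressed weights.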

--