Carl-Johann Simon-Gabriel

 

Slides      Video

Abstract

Any binary classifier (or score function) can be used to define a dissimilarity between two distributions of points with positive and negative labels. In fact, many well-known distribution dissimilarities are classifier-based dissimilarities: the total variation, the KL- or JS-divergence, the Hellinger distance, etc. And many recent popular generative modelling algorithms, e.g. GANs and their variants, compute or approximate these distribution dissimilarities by explicitly training a classifier. After a brief introduction to these classifier-based dissimilarities, I will focus on the influence of the classifier's capacity. I will start with some theoretical considerations illustrated on maximum mean discrepancies (a weak form of total variation that has grown popular in machine learning), and then focus on deep feed-forward networks and their vulnerability to adversarial examples. We will see that this vulnerability is already rooted in the design and capacity of our current networks, and we will discuss ideas to tackle it in the future.
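As a rough illustration of the idea in the abstract (not code from the talk), the sketch below estimates a classifier-based dissimilarity between two samples: label one sample as the positive class and the other as the negative class, train a binary classifier, and read off its held-out accuracy. For a sufficiently rich classifier class and balanced samples, 2 × accuracy − 1 approximates the total variation distance; restricting the classifier's capacity yields a weaker dissimilarity. The distributions and the choice of logistic regression are assumptions made purely for the example.

```python
# Illustrative sketch of a classifier-based dissimilarity (assumed example,
# not the speaker's code): label sample P as 0 and sample Q as 1, fit a
# binary classifier, and use its balanced test accuracy as a dissimilarity.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Two example distributions P and Q (chosen arbitrarily for illustration).
p_samples = rng.normal(loc=0.0, scale=1.0, size=(1000, 2))
q_samples = rng.normal(loc=1.0, scale=1.0, size=(1000, 2))

# Build a binary classification problem from the two samples.
X = np.vstack([p_samples, q_samples])
y = np.concatenate([np.zeros(len(p_samples)), np.ones(len(q_samples))])
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, stratify=y, random_state=0
)

# The capacity of this classifier class determines which dissimilarity
# is (approximately) computed; here a simple linear classifier.
clf = LogisticRegression().fit(X_train, y_train)
acc = clf.score(X_test, y_test)

# Near 0 when the samples are indistinguishable (accuracy ~ 0.5),
# near 1 when the classifier separates them almost perfectly.
print("estimated dissimilarity:", 2 * acc - 1)
```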

Our speaker

Carl-Johann is a Postdoctoral Fellow at ETH Zurich.