I am a postdoc at the Vector Institute and the University of Toronto, working with Rich Zemel and collaborating with David Duvenaud and Roger Grosse. Previously, I was a postdoc in the lab of Matthias Bethge in Tübingen and a Ph.D. student at the University of Amsterdam under the supervision of Arnold Smeulders. My research focuses on gaining a better understanding of open challenges in generative modeling, representation learning, and robust decision-making. My background is in physics with a specialization in neuroscience.
I have been working on representation learning with generative models, invertible neural networks and normalizing flows, generalization under distribution shift, and the connection between invariance and adversarial examples.
R. Geirhos*, J.-H. Jacobsen*, C. Michaelis*, R. Zemel, W. Brendel, M. Bethge, F. Wichmann. Shortcut Learning in Deep Neural Networks. Under Submission, 2020. [Paper, Code]
D. Krueger, E. Caballero, J.-H. Jacobsen, A. Zhang, J. Binas, R. Le Priol, A. Courville. Out-of-Distribution Generalization via Risk Extrapolation (REx). Under Submission, 2020. [Paper, Code]
J. Behrmann*, P. Vicol*, K. C. Wang*, R. Grosse, J.-H. Jacobsen. On the Non-invertibility of Invertible Neural Networks. Under Submission, 2020.
An earlier version appeared in: ML with Guarantees Workshop, NeurIPS 2019. [Paper]
F. Tramer, J. Behrmann, N. Carlini, N. Papernot, J.-H. Jacobsen. Fundamental Tradeoffs between Invariance and Sensitivity to Adversarial Perturbations. ICML, 2020. [Paper, Code]
C. Finlay, J.-H. Jacobsen, L. Nurbekyan, A. Oberman. How To Train Your Neural ODE. ICML, 2020. [Paper]
W. Grathwohl, K. C. Wang, J.-H. Jacobsen, D. Duvenaud, R. Zemel. Cutting out the Middle-Man: Training and Evaluating Energy-Based Models without Sampling. ICML, 2020. [Paper]
W. Grathwohl, K. C. Wang*, J.-H. Jacobsen*, D. Duvenaud, M. Norouzi, K. Swersky. Your Classifier is Secretly an Energy Based Model and You Should Treat it Like One. ICLR, 2020. (ORAL PRESENTATION) [Paper, Code]
E. Fetaya*, J.-H. Jacobsen*, W. Grathwohl, R. Zemel. Understanding the Limitations of Conditional Generative Models. ICLR, 2020. [Paper]
R. T. Q. Chen, J. Behrmann, D. Duvenaud, J.-H. Jacobsen. Residual Flows for Invertible Generative Modeling. NeurIPS, 2019. (SPOTLIGHT PRESENTATION) [Paper, Code]
An earlier version appeared in: INNF Workshop, ICML 2019. (CONTRIBUTED TALK) [Link]
Q. Li*, S. Haque*, C. Anil, J. Lucas, R. Grosse, J.-H. Jacobsen. Preventing Gradient Attenuation in Lipschitz Constrained Convolutional Networks. NeurIPS, 2019. [Paper, Code]
J. Behrmann*, W. Grathwohl*, R. T. Q. Chen, D. Duvenaud, J.-H. Jacobsen*. Invertible Residual Networks. ICML, 2019. (LONG ORAL PRESENTATION) [Paper, Code]
E. Creager, D. Madras, J.-H. Jacobsen, M. Weis, K. Swersky, T. Pitassi, R. Zemel. Flexibly Fair Representation Learning by Disentanglement. ICML, 2019. [Paper]
J.-H. Jacobsen, J. Behrmann, N. Carlini, F. Tramer, N. Papernot. Exploiting Excessive Invariance Caused by Norm-Bounded Adversarial Robustness. SafeML Workshop, ICLR 2019. [Paper, Code]
J.-H. Jacobsen, J. Behrmann, R. Zemel, M. Bethge. Excessive Invariance Causes Adversarial Vulnerability. ICLR, 2019. [Paper]
J.-H. Jacobsen, A.W.M. Smeulders, E. Oyallon. i-RevNet: Deep Invertible Networks. ICLR, 2018. [Paper, Code]
J.-H. Jacobsen, B. de Brabandere, A.W.M. Smeulders. Dynamic Steerable Blocks in Deep Residual Networks. BMVC, 2017. [Paper]
J.-H. Jacobsen, E. Oyallon, S. Mallat, A.W.M. Smeulders. Hierarchical Attribute CNNs. PADL Workshop, ICML 2017. [Paper, Code]
J.-H. Jacobsen, B. de Brabandere, A.W.M. Smeulders. Dynamic Steerable Frame Networks. Pre-print, 2017. [Paper]
J.-H. Jacobsen, J. van Gemert, Z. Lou, A.W.M. Smeulders. Structured Receptive Fields in CNNs. CVPR, 2016. [Paper, Code]
J.-H. Jacobsen, A.W.M. Smeulders. Deep Learning for Neuroimage Classification. OHBM, 2015.