Jörn-Henrik Jacobsen

Contact: j.jacobsen [at] vectorinstitute.ai

I am a Senior Research Scientist at Apple. Previously, I was a postdoc at the Vector Institute and the University of Toronto with Rich Zemel, also collaborating with David Duvenaud and Roger Grosse. Before that, I was a postdoc in the lab of Matthias Bethge in Tübingen and a Ph.D. student at the University of Amsterdam under the supervision of Arnold Smeulders. My research mainly focuses on gaining a better understanding of open challenges in generative modeling, representation learning, and robust decision-making. My background is in physics with a specialization in neuroscience.

I have been working on representation learning with generative models, on invertible neural networks and normalizing flows, on robustness under distribution shift, and on the connection between invariance and generalization. Currently, I mostly think about out-of-distribution (OOD) generalization, (private) representation learning, algorithmic bias, and the ethical questions implied by methods and common practices in AI/ML research. I am also an experimental musician and run my own DIY record label (see the Other Projects tab).

Publications

W. Grathwohl, X. Li, K. Swersky, M. Hashemi, J.-H. Jacobsen, M. Norouzi, G. Hinton. Scaling RBMs to High Dimensional Data with Invertible Neural Networks. ICML INNF+, 2020. [Paper]

S. Zhao, J.-H. Jacobsen, W. Grathwohl. Joint Energy-Based Models for Semi-Supervised Classification. ICML UDL, 2020. [Paper]

E. Creager*, J.-H. Jacobsen*, R. Zemel. Environment Inference for Invariant Learning. ICML UDL, 2020. [Paper]

R. Geirhos*, J.-H. Jacobsen*, C. Michaelis*, R. Zemel, W. Brendel, M. Bethge, F. Wichmann. Shortcut Learning in Deep Neural Networks. Under Submission, 2020. [Paper, Code]

An article we wrote for The Gradient about pigeons and shortcut learning: [Link]

D. Krueger, E. Caballero, J.-H. Jacobsen, A. Zhang, J. Binas, R. Le Priol, A. Courville. Out-of-Distribution Generalization via Risk Extrapolation (REx). Under Submission, 2020. [Paper, Code]

J. Behrmann*, P. Vicol*, K. C. Wang*, R. Grosse, J.-H. Jacobsen. On the Non-invertibility of Invertible Neural Networks. Under Submission, 2020.

An earlier version appeared in: ML with Guarantees Workshop, NeurIPS 2019. [Paper]

F. Tramèr, J. Behrmann, N. Carlini, N. Papernot, J.-H. Jacobsen. Fundamental Tradeoffs between Invariance and Sensitivity to Adversarial Perturbations. ICML, 2020. [Paper, Code]

C. Finlay, J.-H. Jacobsen, L. Nurbekyan, A. Oberman. How To Train Your Neural ODE. ICML, 2020. [Paper]

W. Grathwohl, K. C. Wang, J.-H. Jacobsen, D. Duvenaud, R. Zemel. Cutting out the Middle-Man: Training and Evaluating Energy-Based Models without Sampling. ICML, 2020. [Paper]

W. Grathwohl, K. C. Wang*, J.-H. Jacobsen*, D. Duvenaud, M. Norouzi, K. Swersky. Your Classifier is Secretly an Energy Based Model and You Should Treat it Like One. ICLR, 2020. (ORAL PRESENTATION) [Paper, Code]

E. Fetaya*, J.-H. Jacobsen*, W. Grathwohl, R. Zemel. Understanding the Limitations of Conditional Generative Models. ICLR, 2020. [Paper]

R. T. Q. Chen, J. Behrmann, D. Duvenaud, J.-H. Jacobsen. Residual Flows for Invertible Generative Modeling. NeurIPS, 2019. (SPOTLIGHT PRESENTATION) [Paper, Code]

An earlier version appeared in: INNF Workshop, ICML 2019. (CONTRIBUTED TALK) [Link]

Q. Li*, S. Haque*, C. Anil, J. Lucas, R. Grosse, J.-H. Jacobsen. Preventing Gradient Attenuation in Lipschitz Constrained Convolutional Networks. NeurIPS, 2019. [Paper, Code]

J. Behrmann*, W. Grathwohl*, R. T. Q. Chen, D. Duvenaud, J.-H. Jacobsen*. Invertible Residual Networks. ICML, 2019. (LONG ORAL PRESENTATION) [Paper, Code]

E. Creager, D. Madras, J.-H. Jacobsen, M. Weis, K. Swersky, T. Pitassi, R. Zemel. Flexibly Fair Representation Learning by Disentanglement. ICML, 2019. [Paper]

J.-H. Jacobsen, J. Behrmann, N. Carlini, F. Tramèr, N. Papernot. Exploiting Excessive Invariance Caused by Norm-Bounded Adversarial Robustness. SafeML Workshop, ICLR 2019. [Paper, Code]

J.-H. Jacobsen, J. Behrmann, R. Zemel, M. Bethge. Excessive Invariance Causes Adversarial Vulnerability. ICLR, 2019. [Paper]

J.-H. Jacobsen, A.W.M. Smeulders, E. Oyallon. i-RevNet: Deep Invertible Networks. ICLR, 2018. [Paper, Code]

J.-H. Jacobsen, B. de Brabandere, A.W.M. Smeulders. Dynamic Steerable Blocks in Deep Residual Networks. BMVC, 2017. [Paper]

J.-H. Jacobsen, E. Oyallon, S. Mallat, A.W.M. Smeulders. Hierarchical Attribute CNNs. ICML PADL, 2017. [Paper, Code]

J.-H. Jacobsen, B. de Brabandere, A.W.M. Smeulders. Dynamic Steerable Frame Networks. Pre-print, 2017. [Paper]

J.-H. Jacobsen, J. van Gemert, Z. Lou, A.W.M. Smeulders. Structured Receptive Fields in CNNs. CVPR, 2016. [Paper, Code]

J.-H. Jacobsen, A.W.M. Smeulders. Deep Learning for Neuroimage Classification. OHBM, 2015.

J.-H. Jacobsen, J. Stelzer, T. H. Fritz, G. Chételat, R. La Joie, R. Turner. Why musical memory can be preserved in advanced Alzheimer's disease. BRAIN, 2015. [Paper] [Scientific commentary by Clark and Warren].

In the news: BBC; Science News; Reddit front page; MedicalXpress; The Verge; El País; Noisey; Max Planck Society; Spiegel Online

R. Turner, J.-H. Jacobsen. What stays when everything goes. OUPblog, 2015. Oxford University Press Blog.

Invited Talks

Deep Learning Summer School Lecture on Unsupervised Learning with Likelihood-based Generative Models - Edmonton, Canada; July 2019
ICML Workshop Invited Talk on Invertible Neural Nets and Normalizing Flows - Long Beach, USA; June 2019
ICML Workshop Invited Talk on Tractable Probabilistic Models - Long Beach, USA; June 2019
Google Brain, Toronto - Toronto, Canada; April 2019
Courant Institute, NYU - New York, USA; March 2019
IAS / Princeton University CS - Princeton, USA; March 2019
Vector Institute - Toronto, Canada; June 2018
Amsterdam Data Science Deep Dive - Amsterdam, Netherlands; January 2018
Max Planck Institute for Intelligent Systems - Tübingen, Germany; September 2017
University of Toronto - Toronto, Canada; September 2017
Google Brain, Toronto - Toronto, Canada; September 2017
University of Oxford: VGG Seminar - Oxford, United Kingdom; September 2017
Facebook AI Research - New York, USA; August 2017
University of Tübingen - Tübingen, Germany; May 2017
ICML Workshop Invited Talk on Data-efficient ML - New York City, USA; June 2016

Reviewing

NeurIPS 2019 (Top 50% Reviewer)
ICML 2019
ICLR 2019
NIPS 2018 (Top Reviewer w/ free admission)
CVPR 2018 (Best Reviewer)
ICML 2018
ICLR 2017
NIPS 2017
ICML 2017
ICLR 2016

Jörn-Henrik Jacobsen
Vector Institute
MaRS Centre, West Tower
661 University Ave., Suite 710
Toronto, ON M5G 1M1