Jörn-Henrik Jacobsen

Contact: jorn [at] mailfence.com

I am a researcher and manager at Apple and founder of the Health AI team in Zurich, an AI research team focusing on questions of robustness, generalization, interpretability, mechanistic understanding, and in-silico hardware design. Previously, I was the first-ever postdoc at the Vector Institute and the University of Toronto with Rich Zemel, also collaborating with David Duvenaud and Roger Grosse. Prior to that, I was a postdoc in the lab of Matthias Bethge in Tübingen and a Ph.D. student at the University of Amsterdam under the supervision of Arnold Smeulders. My research focuses on gaining a better understanding of open challenges in generative modeling, representation learning, and robust decision-making. My background is in physics with a specialization in neuroscience.

I have worked on representation learning with generative models, invertible neural networks and normalizing flows, robustness under distribution shift, and the connection between invariance and generalization. Currently, I mostly think about out-of-distribution (OOD) generalization, (private) representation learning, algorithmic bias, and the consequences of common methods and practices in AI/ML research.

Publications [Google Scholar]

A. Wehenkel, J. Behrmann, A. Miller, G. Sapiro, O. Sener, M. Cuturi, J.-H. Jacobsen. Simulation-based Inference for Cardiovascular Models. Under Submission. [Paper]

O. Senouf, J. Behrmann, J.-H. Jacobsen, P. Frossard, E. Abbe, A. Wehenkel. Inferring Cardiovascular Biomarkers with Hybrid Model Learning. NeurIPS 2023 Deep Inverse Workshop. [Paper]

S. Di, E. de Bézenac, E. Fox, J.-H. Jacobsen, A. Karpatne, V. Kashtanova, G. Louppe, N. Takeishi, A. Wehenkel. Synergy of Scientific and Machine Learning Modeling ("SynS & ML"). ICML 2023 Workshop Organizer. [Website]

A. Blaas, A. C. Miller, L. Zappella, J.-H. Jacobsen, C. Heinze-Deml. Considerations for Distribution Shift Robustness in Health. ICLR 2023, Workshop on Trustworthy Machine Learning for Healthcare. (ORAL PRESENTATION), Best Paper Honorable Mention. [Paper]

A. Wehenkel, J. Behrmann, H. Hsu, G. Sapiro, G. Louppe, J.-H. Jacobsen. Robust Hybrid Learning With Expert Augmentation. TMLR, 2023. [Paper]

M. Goldstein, J.-H. Jacobsen, O. Chau, A. Saporta, A. Puli, R. Ranganath, A. C. Miller. Learning Invariant Representations with Missing Data. CLeaR, 2022. [Paper]

D. Krueger, E. Caballero, J.-H. Jacobsen, A. Zhang, J. Binas, R. Le Priol, A. Courville. Out-of-Distribution Generalization via Risk Extrapolation (REx). ICML, 2021. (LONG ORAL PRESENTATION) [Paper, Code]

E. Creager, J.-H. Jacobsen, R. Zemel. Environment Inference for Invariant Learning. ICML, 2021. [Paper, Code]

An earlier version appeared in: Uncertainty in Deep Learning Workshop, ICML 2020. [Paper]

J. Behrmann*, P. Vicol*, K. C. Wang*, R. Grosse, J.-H. Jacobsen. Understanding and Mitigating Exploding Inverses in Invertible Neural Networks. AISTATS, 2021. [Paper, Code]

An earlier version appeared in: ML with Guarantees Workshop, NeurIPS 2019. [Paper]

R. Geirhos*, J.-H. Jacobsen*, C. Michaelis*, R. Zemel, W. Brendel, M. Bethge, F. Wichmann. Shortcut Learning in Deep Neural Networks. Nature Machine Intelligence, 2020. [Paper, Code]

The Gradient article we wrote about pigeons and shortcut learning: [Link]
In the media: The Week; Deep Learning AI; Underrated ML

W. Grathwohl, X. Li, K. Swersky, M. Hashemi, J.-H. Jacobsen, M. Norouzi, G. Hinton. Scaling RBMs to High Dimensional Data with Invertible Neural Networks. ICML INNF+, 2020. [Paper]

S. Zhao, J.-H. Jacobsen, W. Grathwohl. Joint Energy-Based Models for Semi-Supervised Classification. ICML UDL, 2020. [Paper]

F. Tramer, J. Behrmann, N. Carlini, N. Papernot, J.-H. Jacobsen. Fundamental Tradeoffs between Invariance and Sensitivity to Adversarial Perturbations. ICML, 2020. [Paper, Code]

C. Finlay, J.-H. Jacobsen, L. Nurbekyan, A. Oberman. How To Train Your Neural ODE. ICML, 2020. [Paper]

W. Grathwohl, K. C. Wang, J.-H. Jacobsen, D. Duvenaud, R. Zemel. Cutting out the Middle-Man: Training and Evaluating Energy-Based Models without Sampling. ICML, 2020. [Paper]

W. Grathwohl, K. C. Wang*, J.-H. Jacobsen*, D. Duvenaud, M. Norouzi, K. Swersky. Your Classifier is Secretly an Energy Based Model and You Should Treat it Like One. ICLR, 2020. (ORAL PRESENTATION) [Paper, Code]

E. Fetaya*, J.-H. Jacobsen*, W. Grathwohl, R. Zemel. Understanding the Limitations of Conditional Generative Models. ICLR, 2020. [Paper]

R. T. Q. Chen, J. Behrmann, D. Duvenaud, J.-H. Jacobsen. Residual Flows for Invertible Generative Modeling. NeurIPS, 2019. (SPOTLIGHT PRESENTATION) [Paper, Code]

An earlier version appeared in: INNF Workshop, ICML 2019. (CONTRIBUTED TALK) [Link]

Q. Li*, S. Haque*, C. Anil, J. Lucas, R. Grosse, J.-H. Jacobsen. Preventing Gradient Attenuation in Lipschitz Constrained Convolutional Networks. NeurIPS, 2019. [Paper, Code]

J. Behrmann*, W. Grathwohl*, R. T. Q. Chen, D. Duvenaud, J.-H. Jacobsen*. Invertible Residual Networks. ICML, 2019. (LONG ORAL PRESENTATION) [Paper, Code]

E. Creager, D. Madras, J.-H. Jacobsen, M. Weis, K. Swersky, T. Pitassi, R. Zemel. Flexibly Fair Representation Learning by Disentanglement. ICML, 2019. [Paper]

J.-H. Jacobsen, J. Behrmann, N. Carlini, F. Tramer, N. Papernot. Exploiting Excessive Invariance caused by Norm-Bounded Adversarial Robustness. SafeML Workshop, ICLR 2019. [Paper, Code]

J.-H. Jacobsen, J. Behrmann, R. Zemel, M. Bethge. Excessive Invariance Causes Adversarial Vulnerability. ICLR, 2019. [Paper]

J.-H. Jacobsen, A.W.M. Smeulders, E. Oyallon. i-RevNet: Deep Invertible Networks. ICLR, 2018. [Paper, Code]

J.-H. Jacobsen, B. de Brabandere, A.W.M. Smeulders. Dynamic Steerable Blocks in Deep Residual Networks. BMVC, 2017. [Paper]

J.-H. Jacobsen, E. Oyallon, S. Mallat, A.W.M. Smeulders. Hierarchical Attribute CNNs. ICML PADL, 2017. [Paper, Code]

J.-H. Jacobsen, B. de Brabandere, A.W.M. Smeulders. Dynamic Steerable Frame Networks. Pre-print, 2017. [Paper]

J.-H. Jacobsen, J. van Gemert, Z. Lou, A.W.M. Smeulders. Structured Receptive Fields in CNNs. CVPR, 2016. [Paper, Code]

J.-H. Jacobsen, A.W.M. Smeulders. Deep Learning for Neuroimage Classification. OHBM, 2015.

J.-H. Jacobsen, J. Stelzer, T. H. Fritz, G. Chételat, R. La Joie, R. Turner. Why musical memory can be preserved in advanced Alzheimer's disease. Brain, 2015. [Paper] [Scientific commentary by Clark and Warren].

In the media: BBC; Science News; Reddit Frontpage; MedicalXpress; The Verge; El Pais; Noisey; Max-Planck-Society; Spiegel Online

R. Turner, J.-H. Jacobsen. What stays when everything goes. OUPblog (Oxford University Press Blog), 2015.

Invited Talks

Deep Learning Summer School Lecture on Unsupervised Learning with Likelihood-based Generative Models - Edmonton, Canada; July 2019
ICML Workshop Invited Talk on Invertible Neural Nets and Normalizing Flows - Long Beach, USA; June 2019
ICML Workshop Invited Talk on Tractable Probabilistic Models - Long Beach, USA; June 2019
Google Brain, Toronto - Toronto, Canada; April 2019
Courant Institute, NYU - New York, USA; March 2019
IAS / Princeton University CS - Princeton, USA; March 2019
Vector Institute - Toronto, Canada; June 2018
Amsterdam Data Science Deep Dive - Amsterdam, Netherlands; January 2018
Max-Planck-Institute for Intelligent Systems - Tübingen, Germany; September 2017
University of Toronto - Toronto, Canada; September 2017
Google Brain, Toronto - Toronto, Canada; September 2017
University of Oxford: VGG Seminar - Oxford, United Kingdom; September 2017
Facebook AI Research - New York, USA; August 2017
University of Tübingen - Tübingen, Germany; May 2017
ICML Workshop Invited Talk on Data-efficient ML - New York City, USA; June 2016

Reviewing

ICLR2024 AC
NeurIPS2023 AC
NeurIPS2021 (Top Reviewer w/ free admission)
…probably some reviewing missing here
NeurIPS2019 (Top 50% Reviewer)
ICML2019
ICLR2019
NIPS2018 (Top Reviewer w/ free admission)
CVPR2018 (Best Reviewer)
ICML2018
ICLR2017
NIPS2017
ICML2017
ICLR2016