I am a Senior Research Scientist at Apple. Previously, I was a postdoc at the Vector Institute and the University of Toronto with Rich Zemel, also collaborating with David Duvenaud and Roger Grosse. Before that, I was a postdoc in the lab of Matthias Bethge in Tübingen and a Ph.D. student at the University of Amsterdam under the supervision of Arnold Smeulders. My research focuses on gaining a better understanding of open challenges in generative modeling, representation learning, and robust decision-making. My background is in physics with a specialization in neuroscience.
I have been working on representation learning with generative models, on invertible neural networks and normalizing flows, on robustness under distribution shift, and on the connection between invariance and generalization. Currently, I mostly think about out-of-distribution (OOD) generalization, (private) representation learning, algorithmic bias, and the ethical questions raised by methods and common practices in AI/ML research. I am also an experimental musician and run my own DIY record label (see the Other Projects tab).
W. Grathwohl, X. Li, K. Swersky, M. Hashemi, J.-H. Jacobsen, M. Norouzi, G. Hinton. Scaling RBMs to High Dimensional Data with Invertible Neural Networks. ICML INNF+, 2020. [Paper]
S. Zhao, J.-H. Jacobsen, W. Grathwohl. Joint Energy-Based Models for Semi-Supervised Classification. ICML UDL, 2020. [Paper]
E. Creager*, J.-H. Jacobsen*, R. Zemel. Environment Inference for Invariant Learning. ICML UDL, 2020. [Paper]
J. Behrmann*, P. Vicol*, K. C. Wang*, R. Grosse, J.-H. Jacobsen. On the Non-invertibility of Invertible Neural Networks. Under submission, 2020.
C. Finlay, J.-H. Jacobsen, L. Nurbekyan, A. Oberman. How To Train Your Neural ODE. ICML, 2020. [Paper]
W. Grathwohl, K. C. Wang, J.-H. Jacobsen, D. Duvenaud, R. Zemel. Cutting out the Middle-Man: Training and Evaluating Energy-Based Models without Sampling. ICML, 2020. [Paper]
W. Grathwohl, K. C. Wang*, J.-H. Jacobsen*, D. Duvenaud, M. Norouzi, K. Swersky. Your Classifier is Secretly an Energy Based Model and You Should Treat it Like One. ICLR, 2020. (ORAL PRESENTATION) [Paper, Code]
E. Fetaya*, J.-H. Jacobsen*, W. Grathwohl, R. Zemel. Understanding the Limitations of Conditional Generative Models. ICLR, 2020. [Paper]
E. Creager, D. Madras, J.-H. Jacobsen, M. Weis, K. Swersky, T. Pitassi, R. Zemel. Flexibly Fair Representation Learning by Disentanglement. ICML, 2019. [Paper]
J.-H. Jacobsen, J. Behrmann, R. Zemel, M. Bethge. Excessive Invariance Causes Adversarial Vulnerability. ICLR, 2019. [Paper]
J.-H. Jacobsen, B. de Brabandere, A.W.M. Smeulders. Dynamic Steerable Blocks in Deep Residual Networks. BMVC, 2017. [Paper]
J.-H. Jacobsen, B. de Brabandere, A.W.M. Smeulders. Dynamic Steerable Frame Networks. Pre-print, 2017. [Paper]
J.-H. Jacobsen, A.W.M. Smeulders. Deep Learning for Neuroimage Classification. OHBM, 2015.
J.-H. Jacobsen, J. Stelzer, T. H. Fritz, G. Chételat, R. La Joie, R. Turner. Why musical memory can be preserved in advanced Alzheimer's disease. Brain, 2015. [Paper] [Scientific commentary by Clark and Warren]
R. Turner, J.-H. Jacobsen. What stays when everything goes. OUPblog (Oxford University Press Blog), 2015.
MaRS Centre, West Tower
661 University Ave., Suite 710
Toronto, ON M5G 1M1