I am a postdoc at the Vector Institute with Rich Zemel, also collaborating with David Duvenaud and Roger Grosse. Previously, I was a postdoc in the lab of Matthias Bethge in Tübingen and a Ph.D. student at the University of Amsterdam under the supervision of Arnold Smeulders. My research focuses on gaining a better understanding of open challenges in generative modeling, representation learning, and robust decision-making. My background is in physics with a specialization in neuroscience.
I have been working on representation learning with generative models, invertible neural networks and normalizing flows, generalization under distribution shift, and the connection between invariance and adversarial examples.
W. Grathwohl, K. C. Wang*, J.-H. Jacobsen*, D. Duvenaud, M. Norouzi, K. Swersky. Your Classifier is Secretly an Energy Based Model and You Should Treat it Like One. Under Submission, 2019. [Paper]
J. Behrmann*, P. Vicol*, K. C. Wang*, R. Grosse, J.-H. Jacobsen. On the Invertibility of Invertible Neural Networks. NeurIPS Workshop on Machine Learning with Guarantees, 2019.
R. T. Q. Chen, J. Behrmann, D. Duvenaud, J.-H. Jacobsen. Residual Flows for Invertible Generative Modeling. NeurIPS, 2019. (SPOTLIGHT) [Paper, Code]
Q. Li*, S. Haque*, C. Anil, J. Lucas, R. Grosse, J.-H. Jacobsen. Preventing Gradient Attenuation in Lipschitz Constrained Convolutional Networks. NeurIPS, 2019. [Paper, Code]
E. Fetaya*, J.-H. Jacobsen*, R. Zemel. Conditional Generative Models are not Robust. Under Submission, 2019. [Paper]
J. Behrmann*, W. Grathwohl*, R. T. Q. Chen, D. Duvenaud, J.-H. Jacobsen*. Invertible Residual Networks. ICML, 2019. (LONG ORAL) [Paper, Code]
E. Creager, D. Madras, J.-H. Jacobsen, M. Weis, K. Swersky, T. Pitassi, R. Zemel. Flexibly Fair Representation Learning by Disentanglement. ICML, 2019. [Paper]
J.-H. Jacobsen, J. Behrmann, N. Carlini, F. Tramer, N. Papernot. Exploiting Excessive Invariance caused by Norm-Bounded Adversarial Robustness. SafeML Workshop, ICLR 2019. [Paper, Code]
J.-H. Jacobsen, J. Behrmann, R. Zemel, M. Bethge. Excessive Invariance Causes Adversarial Vulnerability. ICLR, 2019. [Paper]
J.-H. Jacobsen, A.W.M. Smeulders, E. Oyallon. i-RevNet: Deep Invertible Networks. ICLR, 2018. [Paper, Code]
J.-H. Jacobsen, B. de Brabandere, A.W.M. Smeulders. Dynamic Steerable Blocks in Deep Residual Networks. BMVC, 2017. [Paper]
J.-H. Jacobsen, E. Oyallon, S. Mallat, A.W.M. Smeulders. Hierarchical Attribute CNNs. ICML PADL, 2017. [Paper, Code]
J.-H. Jacobsen, B. de Brabandere, A.W.M. Smeulders. Dynamic Steerable Frame Networks. Pre-print, 2017. [Paper]
J.-H. Jacobsen, J. van Gemert, Z. Lou, A.W.M. Smeulders. Structured Receptive Fields in CNNs. CVPR, 2016. [Paper, Code]
J.-H. Jacobsen, A.W.M. Smeulders. Deep Learning for Neuroimage Classification. OHBM, 2015.
J.-H. Jacobsen, J. Stelzer, T. H. Fritz, G. Chételat, R. La Joie, R. Turner. Why musical memory can be preserved in advanced Alzheimer's disease. Brain, 2015. [Paper] [Scientific commentary by Clark and Warren].
R. Turner, J.-H. Jacobsen. What stays when everything goes. OUPblog (Oxford University Press Blog), 2015.
Jörn-Henrik Jacobsen
Vector Institute
MaRS Centre, West Tower
661 University Ave., Suite 710
Toronto, ON M5G 1M1