PhD Student at Université de Montréal (MILA).
If you're interested in collaborating or discussing ideas, feel free to reach out.
- How the brain works.
- Limitations of neural networks, especially those that coincide with things humans do well.
- History and language.
I'll post some short essays of mine here when I get around to writing them.
I did my undergraduate degree at Johns Hopkins University, where I worked with Mark Dredze. After that I worked in the demand forecasting group at Amazon as a research scientist. Then I started my PhD at the Université de Montréal (MILA). I have done internships at Preferred Networks (working with Takeru Miyato) and Google Brain (working with David Ha).
For an automatically updated list, check out my Google Scholar profile.
Main Conference Publications:

Alex Lamb*, Tarin Clanuwat*, Asanobu Kitamoto. KuroNet: Pre-Modern Japanese Kuzushiji Character Recognition with Deep Learning. International Conference on Document Analysis and Recognition (ICDAR), 2019. [Oral, ~5% Acceptance Rate]
Vikas Verma, Alex Lamb, Juho Kannala, Yoshua Bengio, David Lopez-Paz. Interpolation Consistency Training for Semi-Supervised Learning. International Joint Conference on Artificial Intelligence (IJCAI), 2019. [17.9% Acceptance Rate]

Alex Lamb, Jonathan Binas, Anirudh Goyal, Sandeep Subramanian, Ioannis Mitliagkas, Denis Kazakov, Yoshua Bengio, Michael C. Mozer. State-Reification Networks: Improving Generalization by Modeling the Distribution of Hidden Representations. International Conference on Machine Learning (ICML), 2019. [Long Oral, 4.5% Acceptance Rate]

Alex Lamb*, Vikas Verma*, Christopher Beckham, Amir Najafi, Aaron Courville, Ioannis Mitliagkas, David Lopez-Paz, Yoshua Bengio. Manifold Mixup: Learning Better Representations by Interpolating Hidden States. International Conference on Machine Learning (ICML), 2019. [22.6% Acceptance Rate]

Tarin Clanuwat, Alex Lamb, Asanobu Kitamoto. End-to-End Pre-Modern Japanese Character (Kuzushiji) Spotting with Deep Learning. Information Processing Society of Japan Conference on Digital Humanities (Jinmoncom), 2018. [Best Paper Award, 1/60 accepted papers]
Alex Lamb, Devon Hjelm, Yaroslav Ganin, Joseph Paul Cohen, Aaron Courville, Yoshua Bengio. GibbsNet: Iterative Adversarial Inference for Deep Graphical Models. Neural Information Processing Systems (NIPS), 2017.

Vincent Dumoulin, Ishmael Belghazi, Ben Poole, Alex Lamb, Martin Arjovsky, Olivier Mastropietro, Aaron Courville. Adversarially Learned Inference. International Conference on Learning Representations (ICLR), 2017.

Alex Lamb*, Anirudh Goyal*, Ying Zhang, Saizheng Zhang, Aaron Courville, Yoshua Bengio. Professor Forcing: A New Algorithm for Training Recurrent Networks. Neural Information Processing Systems (NIPS), 2016.

Alex Lamb, Michael J. Paul, Mark Dredze. Separating Fact from Fear: Tracking Flu Infections on Twitter. North American Chapter of the Association for Computational Linguistics (NAACL), 2013.
Algorithms and Theory Research Projects:
Adversarially Learned Inference
Applied Research Projects:
Pre-Modern Japanese (1100-1900) Document Recognition
When I worked at Amazon (2013-2015), I developed demand forecasting systems using deep learning.
Twitter Flu Analysis
We built a system that analyzes tweets about the flu to discriminate between people who actually have the flu and people who are merely discussing it. With this filtering, our estimates are significantly closer to the CDC's estimated infection rates than raw tweet counts. This work was an oral presentation at NAACL 2013 and received extensive press coverage.
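As a toy illustration of the idea (this is not the original NAACL 2013 system, and the tweets and labels below are invented examples), a minimal naive Bayes text classifier can separate tweets reporting an infection from tweets merely discussing the flu:

```python
import math
from collections import Counter

# Invented toy training tweets: "infection" = the author reports being sick,
# "awareness" = the author is merely discussing the flu.
TRAIN = [
    ("home sick with the flu today fever and chills", "infection"),
    ("caught the flu cant get out of bed", "infection"),
    ("my flu is so bad i missed work", "infection"),
    ("get your flu shot before the season starts", "awareness"),
    ("cdc says flu season is peaking read the report", "awareness"),
    ("worried about the flu outbreak in the news", "awareness"),
]

def train(examples):
    """Count words per class and tweets per class."""
    word_counts = {"infection": Counter(), "awareness": Counter()}
    class_counts = Counter()
    for text, label in examples:
        class_counts[label] += 1
        word_counts[label].update(text.split())
    return word_counts, class_counts

def classify(text, word_counts, class_counts):
    """Return the class with the highest naive Bayes log-probability."""
    vocab = set()
    for counts in word_counts.values():
        vocab.update(counts)
    total = sum(class_counts.values())
    scores = {}
    for label, counts in word_counts.items():
        score = math.log(class_counts[label] / total)  # class prior
        denom = sum(counts.values()) + len(vocab)
        for w in text.split():
            # Laplace (add-one) smoothing so unseen words don't zero out a class.
            score += math.log((counts[w] + 1) / denom)
        scores[label] = score
    return max(scores, key=scores.get)

word_counts, class_counts = train(TRAIN)
print(classify("stuck in bed sick with the flu", word_counts, class_counts))  # → infection
```

The real system used richer features and supervision, but the core task is the same: treating "having the flu" versus "talking about the flu" as a supervised text-classification problem rather than counting every flu mention.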
Best Paper Award, Information Processing Society of Japan, 2018
Deep Learning Poems (20% original)