Welcome! My research spans statistical machine learning and its applications in healthcare and the sciences.
I am an Assistant Professor in the Dept. of Computer Science at Tufts University. Previously, I was a postdoctoral fellow in computer science at Harvard SEAS, advised by Prof. Finale Doshi-Velez. I completed my Ph.D. in CS at Brown University in May 2016, advised by Prof. Erik Sudderth (now at UC-Irvine).
My recent work is motivated by two exciting clinical applications:
- antidepressant drug recommendations for patients with major depression
- forecasting need for interventions in the Intensive Care Unit (ICU)
These applications have inspired new contributions to machine learning:
Semi-supervised learning: Our paper at AISTATS 2018 fits latent variable models so that they provide accurate predictions (e.g. drug recommendations) and interpretable generative models, even when labeled examples are rare.
Explainable AI: Our paper at AAAI 2018 introduces "Tree Regularization", a method to optimize deep neural networks so learned class boundaries are similar to decision trees (the trees can then be inspected by domain experts).
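The quantity behind tree regularization can be sketched with a toy example. Everything below is illustrative, not the paper's actual training procedure: a shallow decision tree is fit greedily to a stand-in model's hard predictions, and the average decision-path length over the data serves as a proxy for how "tree-like" (and thus inspectable) the learned boundary is.

```python
import numpy as np

rng = np.random.default_rng(0)

def majority(y):
    return int(y.mean() >= 0.5)

def fit_tree(X, y, max_depth):
    """Greedy axis-aligned decision tree fit to binary labels y."""
    if max_depth == 0 or len(set(y.tolist())) == 1:
        return majority(y)
    best = None  # (error, feature, threshold, mask)
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j])[:-1]:
            mask = X[:, j] <= t
            err = (np.sum(y[mask] != majority(y[mask]))
                   + np.sum(y[~mask] != majority(y[~mask])))
            if best is None or err < best[0]:
                best = (err, j, t, mask)
    if best is None:  # no usable split (all features constant)
        return majority(y)
    _, j, t, mask = best
    return {"feat": j, "thresh": t,
            "left": fit_tree(X[mask], y[mask], max_depth - 1),
            "right": fit_tree(X[~mask], y[~mask], max_depth - 1)}

def path_length(tree, x):
    """Number of split decisions needed to classify x."""
    depth = 0
    while isinstance(tree, dict):
        depth += 1
        tree = tree["left"] if x[tree["feat"]] <= tree["thresh"] else tree["right"]
    return depth

# Hypothetical stand-in for a trained network's hard predictions.
black_box = lambda X: (X[:, 0] + X[:, 1] > 1.0).astype(int)

X = rng.uniform(0.0, 1.0, size=(200, 2))
y_hat = black_box(X)

surrogate = fit_tree(X, y_hat, max_depth=3)
penalty = np.mean([path_length(surrogate, x) for x in X])
print(f"average decision-path length: {penalty:.2f}")
```

In the actual method this penalty is made differentiable (so it can regularize gradient-based training); the sketch above only shows the interpretability proxy being measured.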
[Aug 2018] I have joined the faculty at Tufts' Computer Science Department!
I'm actively looking for students (ugrad and Ph.D.) for various research projects. Please contact me if interested.
[Apr 2018] Best Paper Award at SoCal NLP 2018
Our winning 2-page short paper was a compact summary of our AISTATS 2018 paper: Semi-Supervised Prediction-Constrained Topic Models. Thanks to co-author Gabe for presenting the work, to the SoCal NLP organizers for hosting, and to Amazon for sponsoring the award.
[Jan 2018] Paper accepted to AISTATS 2018.
Our paper -- Semi-Supervised Prediction-Constrained Topic Models -- describes a new framework for training topic models and other latent variable models to improve supervised predictions while still providing good generative models with interpretable topics. The new approach fixes core issues with past methods like sLDA, and shines especially in semi-supervised tasks, where only a small fraction of training examples are labeled.
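In spirit, prediction-constrained training weights the label likelihood against the data likelihood rather than treating them symmetrically. Here is a minimal 1-D sketch under toy assumptions (a two-component Gaussian mixture whose single parameter mu is shared by the generative model and a sigmoid classifier; the grid search is illustrative, not the paper's optimization algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 1-D observations, labels given by their sign.
x = rng.normal(0.0, 2.0, size=500)
y = (x > 0).astype(int)

def log_gen(x, mu):
    """Per-example log-likelihood under 0.5*N(-mu,1) + 0.5*N(+mu,1)."""
    a = -0.5 * (x - mu) ** 2
    b = -0.5 * (x + mu) ** 2
    m = np.maximum(a, b)
    return m + np.log(0.5 * np.exp(a - m) + 0.5 * np.exp(b - m)) - 0.5 * np.log(2 * np.pi)

def log_pred(x, y, mu):
    """Per-example label log-likelihood with p(y=1|x) = sigmoid(2*mu*x)."""
    logits = 2.0 * mu * x
    # log sigmoid(z) = -log(1 + exp(-z)), computed stably
    return np.where(y == 1, -np.logaddexp(0.0, -logits), -np.logaddexp(0.0, logits))

def best_mu(lam, grid=np.linspace(0.1, 5.0, 50)):
    """Maximize mean log p(x) + lam * mean log p(y|x) over a grid."""
    scores = [np.mean(log_gen(x, m)) + lam * np.mean(log_pred(x, y, m)) for m in grid]
    return grid[int(np.argmax(scores))]

print("mu with lam=0 :", best_mu(0.0))   # fits the data distribution only
print("mu with lam=10:", best_mu(10.0))  # trades generative fit for prediction
```

Because the data here are unimodal while the labels reward a sharper decision boundary, raising lam pulls the solution away from the purely generative fit, which is the trade-off the prediction-constrained framework makes explicit.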
[Dec 2017] Presenting at NIPS 2017 Workshops
• Poster and Talk (by Mike Wu): Optimizing deep models with tree regularization at the Transparent and Interpretable ML workshop (NIPS TIML 2017); this work will also appear at AAAI '18
[Nov 2017] Paper accepted to AAAI 2018.
Our paper describes a new regularization method to optimize recurrent neural networks to have more interpretable decision boundaries (closer to the decision trees that clinicians like).
[Nov 2017] Invited talk at MIT Lincoln Laboratory
"Optimizing Machine Learning Models for Clinical Interpretability"
Slides: [slides.pdf, 5 MB]
[Sep 2017] Organizing Machine Learning for Health (ML4H) workshop at NIPS 2017
Please submit some awesome papers!
[Mar 2017] Presented paper on ICU intervention prediction at AMIA CRI '17
Nominated for Clinical Informatics Research Award (1 of 7 nominees)
[Feb 2017] Invited talk on BNPy at Boston Bayesians meetup
[Dec 2016] BNPy software tutorial at NIPS 2016 workshop
New: BNPy project website with example gallery
[Dec 2016] Posters at NIPS 2016 Workshops
[Sep 2016] Organizing Workshop at NIPS '16: Practical Bayesian Nonparametrics
Please consider submitting to our workshop: https://sites.google.com/site/nipsbnp2016/
[Aug 2016] Started post-doc at Harvard
You can now find me at my new office in Maxwell-Dworkin (MD 209).
[May 2016] Successful Ph.D. defense!
Many thanks to family and friends who supported me along the way.
[Jan 2016] Invited talks on my thesis.
I visited several research groups at Northeastern, U. Washington, and MIT to discuss my thesis work on effective variational inference for clustering models that scales to millions of examples. [slides PDF] [slides PPTX]
[Dec 2015] Invited talk at NIPS 2015 workshop.
I gave an invited talk at the Bayesian Nonparametrics: The Next Generation workshop about my thesis work developing effective variational inference for models based on the Dirichlet process and its hierarchical variants. [slides PDF]
[Sep 2015] Paper accepted at NIPS 2015.
Our paper [PDF] describes a new algorithm for Bayesian nonparametric hidden Markov models that can handle hundreds of sequences and add or remove hidden states during a single training run.
[May 2015] Paper accepted at AISTATS 2015.
Our paper [PDF] describes a new algorithm for topic models that can effectively remove redundant or junk topics during a single training run.