I'm Maximilian Golub, a Data & Applied Scientist at Microsoft AI & Advanced Architecture.
I'm currently working on projects that are cool and neat, but that can't yet be discussed publicly.
Previously, I was a Master's student at UBC under the supervision of Mieszko Lis and Guy Lemieux, where I researched ways to prune neural networks during training, enabling future accelerators that can train with reduced memory.
I published a paper at SysML 2018, "Full deep neural network training on a pruned weight budget" (an older version is available on arXiv). This pruning technique, DropBack, achieves state-of-the-art weight-pruning results for WRN-28-10 on CIFAR-10 during training.
Things I love include field-programmable gate arrays, neural networks, Python, automation, photography, skiing, and cars.
Right now, I'm working on learning more about efficient DNN training methods.