Sometimes you wake up. Sometimes the fall kills you. And sometimes, when you fall, you fly.
Hello, I am Mahi!
I am a second-year Ph.D. student in Computer Science at NYU Courant, working primarily with Prof. Lerrel Pinto. I work on machine learning problems that allow robots to learn both from humans and on their own. Before this, I was at MIT, where I earned my undergraduate and Master's degrees and worked on Robust Machine Learning with Prof. Aleksander Madry.
Right now, my research goal is to figure out how to get robots to cohabit and collaborate with us in household settings. If you are not a robotics person, think of it this way: I plan to retire when I can tell a robot to cook me khichuri, and it can just look up a recipe on YouTube and cook it for me.
Besides my graduate research, I keep busy with a variety of things. In 2020 alone, I helped Bangladesh's National Data Analytics Task Force with COVID-19 data analytics and helped a university transition to online learning. Since then, I've also helped a friend's startup with some quantitative analytics. In my free time, I like reading books and visiting New York's large trove of museums.
My recent research projects, both at NYU and beyond.
Nur Muhammad (Mahi) Shafiullah, Lerrel Pinto
Reward-free, unsupervised discovery of skills is an attractive alternative to the bottleneck of hand-designing rewards in environments where task supervision is scarce or expensive. However, current skill pre-training methods, like many RL techniques, make a fundamental assumption: stationary environments during training. Traditional methods learn all their skills simultaneously, which makes it difficult for them to both adapt quickly to changes in the environment and avoid forgetting earlier skills after such adaptation. In an evolving or expanding environment, by contrast, skill learning must adapt quickly to new situations without forgetting previously learned skills. These two conditions make it difficult for classic skill discovery methods to do well in an evolving environment. In this work, we propose a new framework for skill discovery, where skills are learned one after another in an incremental fashion. This framework allows newly learned skills to adapt to new environment or agent dynamics, while the fixed old skills ensure the agent does not forget what it has already learned. We demonstrate experimentally that in both evolving and static environments, incremental skills significantly outperform current state-of-the-art skill discovery methods on both skill quality and the ability to solve downstream tasks. Videos of learned skills and code are publicly available at https://mahis.life/disk/
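To make the incremental recipe concrete, here is a minimal Python sketch of the outer loop it implies: each new skill is trained while every previously learned skill stays frozen. The names make_policy and train_skill are hypothetical placeholders for your own policy constructor and RL objective, not the paper's actual code.

```python
from typing import Callable, List
import torch.nn as nn

def learn_skills_incrementally(
    make_policy: Callable[[], nn.Module],                       # builds a fresh skill policy
    train_skill: Callable[[nn.Module, List[nn.Module]], None],  # your RL update (placeholder)
    num_skills: int,
) -> List[nn.Module]:
    """Learn skills one after another; earlier skills stay frozen."""
    skills: List[nn.Module] = []
    for _ in range(num_skills):
        new_skill = make_policy()
        # Only the newest skill is optimized; the frozen library is passed in so the
        # reward can push the new skill toward behaviors the old ones do not cover.
        train_skill(new_skill, skills)
        for p in new_skill.parameters():
            p.requires_grad_(False)   # freeze: old skills are never overwritten
        skills.append(new_skill)
    return skills
```

Because old skills are frozen, the library only grows, which is what protects earlier skills from being forgotten when the environment changes.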
Jyothish Pari*, Nur Muhammad (Mahi) Shafiullah*, Sridhar Pandian Arunachalam, Lerrel Pinto
While visual imitation learning offers one of the most effective ways of learning from visual demonstrations, generalizing from them requires either hundreds of diverse demonstrations, task-specific priors, or large, hard-to-train parametric models. One reason such complexities arise is because standard visual imitation frameworks try to solve two coupled problems at once: learning a succinct but good representation from the diverse visual data, while simultaneously learning to associate the demonstrated actions with such representations. Such joint learning causes an interdependence between these two problems, which often results in needing large amounts of demonstrations for learning. To address this challenge, we instead propose to decouple representation learning from behavior learning for visual imitation. First, we learn a visual representation encoder from offline data using standard supervised and self-supervised learning methods. Once the representations are trained, we use non-parametric Locally Weighted Regression to predict the actions. We experimentally show that this simple decoupling improves the performance of visual imitation models on both offline demonstration datasets and real-robot door opening compared to prior work in visual imitation. All of our generated data, code, and robot videos are publicly available at the project URL.
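As a toy illustration of the non-parametric step, here is a distance-weighted k-nearest-neighbor regressor in Python. It assumes you already have embeddings produced by a separately trained encoder; the array names, the Euclidean metric, and the inverse-distance weights are illustrative choices, not necessarily the ones used in the paper.

```python
import numpy as np

def weighted_knn_action(query_emb, demo_embs, demo_actions, k=5, eps=1e-8):
    """Predict an action as a distance-weighted average over the k nearest
    demonstration embeddings (a simple form of locally weighted regression).

    query_emb:    (d,)   embedding of the current observation
    demo_embs:    (N, d) embeddings of demonstration frames
    demo_actions: (N, a) actions recorded with those frames
    """
    dists = np.linalg.norm(demo_embs - query_emb, axis=1)   # (N,)
    nearest = np.argsort(dists)[:k]                         # indices of k closest demos
    weights = 1.0 / (dists[nearest] + eps)                  # closer demos weigh more
    weights /= weights.sum()
    return weights @ demo_actions[nearest]                  # (a,)
```

At run time you would encode the current camera frame with the frozen encoder and pass the resulting embedding in as query_emb.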
Kai Y. Xiao, Vincent Tjeng, Nur Muhammad (Mahi) Shafiullah, Aleksander Madry
We explore the concept of co-design in the context of neural network verification. Specifically, we aim to train deep neural networks that not only are robust to adversarial perturbations but also whose robustness can be verified more easily. To this end, we identify two properties of network models - weight sparsity and so-called ReLU stability - that turn out to significantly impact the complexity of the corresponding verification task. We demonstrate that improving weight sparsity alone already enables us to turn computationally intractable verification problems into tractable ones. Then, improving ReLU stability leads to an additional 4-13x speedup in verification times. An important feature of our methodology is its “universality,” in the sense that it can be used with a broad range of training procedures and verification approaches.
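As a rough illustration of the kind of training-time regularizers involved, the Python sketch below shows an L1 penalty that encourages weight sparsity and a simple surrogate that penalizes ReLU units whose pre-activation bounds straddle zero (i.e., unstable ReLUs). Both function names are illustrative, and the exact objectives and bound computations used in the paper differ.

```python
import torch
import torch.nn as nn

def l1_sparsity(model: nn.Module, coeff: float = 1e-4) -> torch.Tensor:
    """L1 penalty on all weights; driving weights to zero shrinks the verification problem."""
    return coeff * sum(p.abs().sum() for p in model.parameters())

def relu_instability(lower: torch.Tensor, upper: torch.Tensor) -> torch.Tensor:
    """Penalty on ReLU units whose pre-activation interval [lower, upper] crosses zero.

    A ReLU is "stable" when its bounds share a sign; only unstable units force the
    verifier to branch, so fewer of them means faster verification. This surrogate
    is illustrative, not the exact objective from the paper.
    """
    straddle = (lower < 0) & (upper > 0)
    return (torch.relu(upper) * torch.relu(-lower))[straddle].sum()
```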
Topics that interest me and that I am continuously working to learn more about.
GitHub repositories that I've built.
Articles I've written.
Randomly chosen favorite quotes from Goodreads.