I am a research scientist at FAIR@Meta AI in Menlo Park. I lead the Cortex team, which focuses on learning and evaluating foundation models for robotics. My research interests lie at the intersection of machine learning and robotics, with a special interest in lifelong learning. I received my Ph.D. from the University of Southern California, where I was advised by Stefan Schaal. My publications are available on my Google Scholar page, and my open-source contributions can be found on my GitHub profile.
Announcements
Nov 2024 | I am looking for a Research Scientist for my team. If you're interested in research at the intersection of manipulation, world models, and imitation learning, please contact me! See our job posting here.
Research Updates!
April 2024 | We're releasing OpenEQA, the Open-Vocabulary Embodied Question Answering benchmark. It measures an AI agent's understanding of physical environments by probing it with open-vocabulary questions like “Where did I leave my badge?” See our blog post and website for more details.
May 2023 | We're presenting BC-IRL at ICLR 2023! (paper, code) In this work we analyze SOTA inverse reinforcement learning algorithms and show that the learned reward functions overfit to the demonstrations used for training. We also present an algorithm for training generalizable reward functions!
March 2023 | You can find VC-1 on Hugging Face: VC-1 Base.
March 2023 | We've open-sourced CortexBench and VC-1; find details on our website! In short, CortexBench is a benchmark for evaluating visual foundation models for robotics on 17 simulated robotics tasks, and VC-1 is a visual foundation model that achieves SOTA on average across all 17 tasks! We summarize our findings on training VC-1 in our paper Where are we in the search for an Artificial Visual Cortex for Embodied Intelligence?, and you can find the code here.
January 2023 | Our journal paper on Multi-Modal Learning of Keypoint Predictive Models for Visual Object Manipulation has been published in the IEEE Transactions on Robotics. We leverage keypoint models to learn structured visual dynamics models for general object manipulation.
January 2021 | Check out our blog post on Teaching AI to manipulate objects using visual demos, which highlights our work on learning from visual demonstrations via model-based (inverse) reinforcement learning.
January 2021 | Our work on Meta-Learning via Learned Loss received the Best Student Paper Award at ICPR 2020.
October 2020 | Our work on Model-Based Inverse Reinforcement Learning from Visual Demonstrations has been accepted at CoRL 2020 (website, video).
October 2020 | We've open-sourced our library Differentiable Robot Models.
Last updated on 2024-11-20