I'm studying the geometry of robot motion for manipulation. I'm especially interested in the Riemannian geometry of the motion optimization problem, how it interacts with workspace geometry in the presence of obstacles, and how it relates to second-order methods for nonlinear constrained optimization. Earlier work in these areas includes algorithms for motion planning using fast trajectory optimizers (CHOMP) and methods for learning local mappings to latent Euclidean spaces for control. I'm currently working to generalize these ideas into a unified optimization framework called Riemannian Motion Optimization (RieMO), with specific applications to Apollo, the dual-arm manipulation platform at the Max Planck Institute for Intelligent Systems' Autonomous Motion Department.
I earned my PhD from Carnegie Mellon's Robotics Institute in 2009, studying imitation learning, structured prediction, and functional gradient techniques for learning and optimization. Drew Bagnell, Martin Zinkevich, and I developed a methodology for training planning and control algorithms for robotics, known as Inverse Optimal Control (IOC), using ideas from Maximum Margin Structured Classification (MMSC). We developed online, batch, and functional subgradient methods (exponentiated boosting) to learn efficiently within these frameworks.
Collectively, our framework is known as Maximum Margin Planning (MMP), and our specific class of linear and nonlinear gradient-based approaches to solving these problems is known as LEArning to seaRCH (LEARCH). This website describes a number of applications from my own work, including footstep prediction, grasp prediction, heuristic learning, overhead navigation, LADAR classification, and optical character recognition; these algorithms are generally applicable beyond robotics as well and have been applied to other types of structured learning problems, such as parsing. See additionally IOHC and CHOMP for methods addressing high-dimensional configuration spaces where the forward problem (optimal planning or control) itself can be intractable.
History and experience
I completed my Ph.D. work at Carnegie Mellon University’s Robotics Institute under Professor J. Andrew Bagnell in 2009. Since then I've been at TTI-C on the University of Chicago campus building robots, at Intel Labs in both Seattle and Pittsburgh studying trajectory optimization, and at Google developing large-scale learning systems to assess the quality of ad landing pages. I'm currently part of Stefan Schaal's Autonomous Motion Department (AMD) at the Max Planck Institute for Intelligent Systems in Tübingen and Marc Toussaint's Machine Learning and Robotics lab at the University of Stuttgart, where I teach and research motion optimization and learning for manipulation.