#258: DART: Noise injection for robust imitation learning, with Michael Laskey

Toyota HSR Trained with DART to Make a Bed.

In this episode, Audrow Nash speaks with Michael Laskey, PhD student at UC Berkeley, about a method for robust imitation learning called DART. Laskey discusses how DART relates to previous imitation learning methods, how this approach has been used for folding bed sheets, and the importance of robotics leveraging theory from other disciplines.
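The core idea behind DART is to inject noise into the supervisor's actions while collecting demonstrations: the robot executes the noisy action (so the recorded states cover regions a learned policy is likely to visit), but the noise-free intended action is stored as the training label. The sketch below illustrates this idea in a toy setting; the linear `supervisor_action` policy and single-integrator dynamics are hypothetical stand-ins, not the paper's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def supervisor_action(state):
    # Hypothetical linear supervisor policy (stand-in for a human or
    # algorithmic expert demonstrating the task).
    return -0.5 * state

def collect_dart_demo(state, steps=50, noise_std=0.1):
    """Collect one demonstration with Gaussian noise injected into the
    supervisor's executed actions (the core mechanism of DART).

    The *noisy* action drives the dynamics, so the logged states drift
    off the supervisor's exact trajectory, while the *intended*
    noise-free action is kept as the supervised-learning label.
    """
    states, labels = [], []
    for _ in range(steps):
        intended = supervisor_action(state)
        executed = intended + rng.normal(0.0, noise_std, size=state.shape)
        states.append(state.copy())
        labels.append(intended)      # label with the noise-free action
        state = state + executed     # toy single-integrator dynamics
    return np.array(states), np.array(labels)

states, labels = collect_dart_demo(np.ones(2))
```

Training a policy by regression on `(states, labels)` then exposes it to the kinds of off-distribution states it will encounter at test time, which is what makes the resulting behavior robust.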

To learn more, see this post on Robohub from the Berkeley Artificial Intelligence Research (BAIR) Lab.

Michael Laskey

Michael Laskey is a Ph.D. Candidate in EECS at UC Berkeley, advised by Prof. Ken Goldberg in the AUTOLAB (Automation Sciences). Michael's Ph.D. work develops new algorithms for deep learning of robust robot control policies and examines how to reliably apply recent deep learning advances to scalable robot learning in challenging unstructured environments. Michael received a B.S. in Electrical Engineering from the University of Michigan, Ann Arbor. His work has been nominated for multiple best paper awards at IEEE ICRA and CASE and has been featured in news outlets such as MIT Tech Review and Fast Company.

Author: Audrow Nash