Reinforcement Learning for Robotics

Deep learning is a highly promising tool for numerous fields. RSL is interested in applying it to legged robots in two directions: motion control and perception.

Learning Locomotion over Challenging Terrain

Legged locomotion can extend the operational domain of robots to some of the most challenging environments on Earth. However, conventional controllers for legged locomotion are based on elaborate state machines that explicitly trigger the execution of motion primitives and reflexes. These designs have increased in complexity but fallen short of the generality and robustness of animal locomotion. Here, we present a robust controller for blind quadrupedal locomotion in challenging natural environments. Our approach incorporates proprioceptive feedback in locomotion control and demonstrates zero-shot generalization from simulation to natural environments. The controller is trained by reinforcement learning in simulation and is driven by a neural network policy that acts on a stream of proprioceptive signals. It retains its robustness under conditions that were never encountered during training: deformable terrain such as mud and snow, dynamic footholds such as rubble, and overground impediments such as thick vegetation and gushing water. The presented work indicates that robust locomotion in natural environments can be achieved by training in simple domains.

Science Robotics, Vol. 5, eabc5986 (2020)
Project Homepage
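
As a rough illustration of the kind of policy described above, the sketch below shows a small PyTorch network that maps a history of proprioceptive measurements to joint position targets. The observation layout, history length, and layer sizes are assumptions chosen for illustration only and do not reproduce the published architecture.

# Minimal sketch of a proprioceptive locomotion policy (illustrative only;
# layer sizes and observation layout are assumptions, not the published model).
import torch
import torch.nn as nn

class ProprioceptivePolicy(nn.Module):
    def __init__(self, history_len=50, obs_dim=48, act_dim=12):
        super().__init__()
        # The policy sees only a short history of proprioceptive signals
        # (e.g. joint positions/velocities, base orientation, previous actions).
        self.net = nn.Sequential(
            nn.Linear(history_len * obs_dim, 256),
            nn.ELU(),
            nn.Linear(256, 128),
            nn.ELU(),
            nn.Linear(128, act_dim),  # e.g. joint position targets for 12 actuators
        )

    def forward(self, obs_history):
        # obs_history: (batch, history_len, obs_dim) -> flatten and map to actions
        return self.net(obs_history.flatten(start_dim=1))

# Example: one forward pass on a dummy observation history
policy = ProprioceptivePolicy()
actions = policy(torch.zeros(1, 50, 48))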


Motion control

RSL has been developing control policies using reinforcement learning. The main approach is “sim-to-real” transfer (shown in Fig. 1), whereby a policy trained only in simulation is transferred to the real robot. This involves the development of a high-fidelity simulator as well as learning approaches suitable for motion control. The policies developed with this approach are agile, dynamic, and efficient.
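
The listing below is a minimal, self-contained sketch of the “train in simulation, deploy on hardware” idea. The DummySim class and the plain REINFORCE update are stand-ins chosen only to keep the example runnable; they do not represent RSL's simulator, reward design, or RL algorithm.

# Schematic sim-to-real workflow: optimize the policy purely on simulated
# rollouts, then export the frozen network for execution on the robot.
# DummySim and the REINFORCE update below are illustrative stand-ins.
import torch
import torch.nn as nn

class DummySim:
    """Stand-in simulator: 8-D proprioceptive state, 2-D action, toy reward."""
    def reset(self):
        self.state = torch.zeros(8)
        return self.state
    def step(self, action):
        self.state = self.state + 0.1 * torch.randn(8)
        reward = -action.pow(2).sum()   # placeholder reward, not a real objective
        return self.state, reward

policy = nn.Sequential(nn.Linear(8, 32), nn.Tanh(), nn.Linear(32, 2))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

env = DummySim()
for episode in range(100):
    state = env.reset()
    log_probs, rewards = [], []
    for t in range(50):                              # one simulated rollout
        mean = policy(state)
        dist = torch.distributions.Normal(mean, 0.1)
        action = dist.sample()
        log_probs.append(dist.log_prob(action).sum())
        state, reward = env.step(action)
        rewards.append(reward)
    # REINFORCE: weight each step's log-probability by its reward-to-go
    returns = torch.cumsum(torch.stack(rewards).flip(0), dim=0).flip(0)
    loss = -(torch.stack(log_probs) * returns).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Zero-shot transfer: the trained weights are exported and run on the robot.
torch.save(policy.state_dict(), "policy.pt")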

Contact: Vassilios Tsounis, David Höller, Joonho Lee


Perception

To leverage our locomotion control skills in rugged outdoor environments and to enable autonomous operation, accurate environment perception is necessary. Because the interaction with a multitude of different terrains and obstacles is difficult to model, we aim to learn it from real robot interactions with the environment. Once we have learned to predict the properties of our surroundings, we can use this knowledge to improve locomotion (walking on asphalt should differ from walking on sand) and to make intelligent navigation choices (difficult terrain can be avoided entirely).
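
As a rough sketch of this idea, the snippet below trains a small classifier that maps windows of proprioceptive interaction data (e.g., foot-contact forces and joint signals) to terrain classes; the feature layout, class list, and random placeholder data are illustrative assumptions rather than RSL's actual pipeline. The predicted class could then switch gait parameters or feed a navigation cost map so that difficult terrain is avoided.

# Minimal sketch of learning terrain properties from logged robot-terrain
# interactions (feature layout, terrain classes, and network size are
# illustrative assumptions).
import torch
import torch.nn as nn

TERRAIN_CLASSES = ["asphalt", "grass", "sand", "rubble"]   # assumed labels

classifier = nn.Sequential(
    nn.Linear(64, 128),   # 64-D window of contact forces and joint signals (assumed)
    nn.ReLU(),
    nn.Linear(128, len(TERRAIN_CLASSES)),
)
optimizer = torch.optim.Adam(classifier.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Training on recorded interaction data; here replaced by random placeholders.
features = torch.randn(256, 64)                      # proprioceptive feature windows
labels = torch.randint(len(TERRAIN_CLASSES), (256,)) # terrain labels per window
for epoch in range(10):
    logits = classifier(features)
    loss = loss_fn(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# At deployment, the prediction can inform gait selection or navigation costs.
predicted = TERRAIN_CLASSES[classifier(features[:1]).argmax().item()]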

Contact: Lorenz Wellhausen, David Höller
 
