We describe a simulation environment that enables the development and testing of control policies for off-road mobility of autonomous agents. The environment is demonstrated in conjunction with the design and assessment of a reinforcement learning policy that uses sensor fusion and inter-agent communication to enable the movement of mixed convoys of human-driven and autonomous vehicles. Policies are learned on rigid terrain and are subsequently shown to transfer successfully to hard (silt-like) and soft (snow-like) deformable terrains. The enabling simulation environment is built on the high-fidelity, physics-based simulation engine Chrono. Five Chrono modules are employed herein: Chrono::Engine, Chrono::Vehicle, PyChrono, SynChrono, and Chrono::Sensor. Vehicles are modeled using Chrono::Engine and Chrono::Vehicle and deployed on deformable terrain within the training/testing environment. Using PyChrono, the Python interface to the C++ Chrono API, together with OpenAI Gym's supporting infrastructure, training is conducted in a GymChrono learning environment. The GymChrono-generated policy is subsequently deployed for testing in SynChrono, a scalable, cluster-deployable multi-agent testing infrastructure built on MPI. SynChrono facilitates inter-agent communication and maintains time and space coherence between agents. A sensor modeling tool, Chrono::Sensor, supplies sensing data that informs the agents during both learning and inference. The software stack and the Chrono simulator are both open source. Relevant movies: [1].
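To make the training workflow concrete, the sketch below shows the Gym-style interface (`reset()`/`step()`) through which an RL trainer would interact with a vehicle simulation such as the GymChrono environment described above. The `VehicleSimStub` class, its toy dynamics, and the distance-to-goal reward are all hypothetical placeholders; a real GymChrono environment wraps an actual PyChrono/Chrono::Vehicle model and the Chrono::Sensor data stream.

```python
class VehicleSimStub:
    """Stand-in for a PyChrono vehicle simulation (hypothetical toy dynamics)."""

    def __init__(self):
        self.position = 0.0

    def apply_controls(self, throttle, steering):
        # Toy kinematics: throttle advances the vehicle along one axis.
        self.position += throttle

    def observe(self):
        # A real environment would return fused sensor data here.
        return [self.position]


class ConvoyEnv:
    """Gym-style environment: reset() -> obs, step(action) -> (obs, reward, done, info)."""

    def __init__(self, goal=10.0, max_steps=100):
        self.goal = goal
        self.max_steps = max_steps

    def reset(self):
        self.sim = VehicleSimStub()
        self.steps = 0
        return self.sim.observe()

    def step(self, action):
        throttle, steering = action
        self.sim.apply_controls(throttle, steering)
        self.steps += 1
        obs = self.sim.observe()
        dist = abs(self.goal - obs[0])
        reward = -dist  # hypothetical reward: penalize distance to goal
        done = dist < 0.5 or self.steps >= self.max_steps
        return obs, reward, done, {}


# Rollout with a fixed policy, standing in for the learned RL policy.
env = ConvoyEnv()
obs = env.reset()
done = False
while not done:
    obs, reward, done, info = env.step((1.0, 0.0))
```

In the actual stack, this same `reset()`/`step()` loop is driven by OpenAI Gym's training infrastructure during learning, and the resulting policy is then exported for multi-agent testing in SynChrono.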
