News
05/2024 | "Model Predictive Control for Aggressive Driving Over Uneven Terrain" is accepted to R:SS 2024.
05/2024 | "Dynamics Models in the Aggressive Off-Road Driving Regime" is accepted to the ICRA 2024 Workshop on Resilient Off-Road Autonomy.
04/2024 | Invited to review for the ICRA 2024 Workshop on Resilient Off-Road Autonomy.
03/2023 | Awarded the National Science Foundation Graduate Research Fellowship (NSF GRFP).
Research
I am interested in using machine learning to automate systems with complex, real-world dynamics. Humans are remarkably efficient with limited information and experience, not only in controlling their own bodies but also machines and tools. How can this human efficiency and adaptability be formalized and then transferred to data pipelines for robotics?
Model Predictive Control for Aggressive Driving over Uneven Terrain
Tyler Han, Alex Liu, Anqi Li, Alex Spitzer, Guanya Shi, Byron Boots
Robotics: Science & Systems, 2024
website / arXiv
Terrain traversability in unstructured off-road autonomy has traditionally relied on semantic classification, resource-intensive dynamics models, or purely geometry-based methods to predict vehicle-terrain interactions. While inconsequential at low speeds, uneven terrain subjects our full-scale system to safety-critical challenges at operating speeds of 7–10 m/s. This study focuses particularly on uneven terrain
[...]
Dynamics Models in the Aggressive Off-Road Driving Regime
Tyler Han, Sidharth Talia, Rohan Panicker, Preet Shah, Neel Jawale, Byron Boots
Workshop on Resilient Off-Road Autonomy, ICRA, 2024
arXiv
Current developments in autonomous off-road driving are steadily increasing performance through higher speeds and more challenging, unstructured environments. However, this operating regime subjects the vehicle to larger inertial effects, where consideration of higher-order states is necessary to avoid failures such as rollovers or excessive impact forces. Aggressive driving through Model
[...]
Transferable Reinforcement Learning via Generalized Occupancy Models
Chuning Zhu, Xinqi Wang, Tyler Han, Simon Du, Abhishek Gupta
arXiv preprint, 2024
website / arXiv
Intelligent agents must be generalists, capable of quickly adapting to various tasks. In reinforcement learning (RL), model-based RL learns a dynamics model of the world, in principle enabling transfer to arbitrary reward functions through planning. However, autoregressive model rollouts suffer from compounding error, making model-based RL ineffective for long-horizon problems.
[...]
Learning Motor Primitives
Tyler Han, Carl Glen Henshaw
arXiv preprint, 2021
arXiv
In an undergraduate project, I tackled part of the challenge of teaching robots motor skills from a small number of demonstrations. We proposed a novel approach that applies the theories of Koopman operators and Dynamic Movement Primitives to Learning from Demonstration. Our approach, named Autoencoder Dynamic Mode Decomposition
[...]