Tyler Han

I am a PhD student in the Paul G. Allen School of Computer Science & Engineering at the University of Washington, where I am advised by Byron Boots in the Robot Learning Lab. I am also an NSF Graduate Research Fellow.

Prior to UW, I completed a B.S. in Aerospace Engineering and a B.S. in Computer Science at the University of Maryland, College Park. As an undergraduate, I worked with Glen Henshaw and Patrick Wensing at the Naval Research Laboratory in Washington, D.C.

Email  /  GitHub  /  Google Scholar  /  LinkedIn  /  CV


News

10/2024 Invited to review: International Conference on Robotics and Automation (ICRA) 2025, Robotics and Automation Letters (RA-L), and Transactions on Robotics (T-RO)
10/2024 "Transferable Reinforcement Learning via Generalized Occupancy Models" is accepted to NeurIPS 2024
05/2024 "Model Predictive Control for Aggressive Driving Over Uneven Terrain" is accepted to R:SS 2024.
05/2024 "Dynamics Models in the Aggressive Off-Road Driving Regime" is accepted to ICRA 2024 Workshop on Resilient Off-Road Autonomy
04/2024 Invited to review for the ICRA 2024 Workshop on Resilient Off-Road Autonomy
03/2023 Awarded the National Science Foundation Graduate Research Fellowship (NSF GRFP)

Research

I am interested in using machine learning to automate systems with complex, real-world dynamics. Humans are remarkably efficient with limited information and experience, controlling not only their own bodies but also machines and tools. How can this efficiency and adaptability be formalized and transferred to data pipelines for robotics?


Transferable Reinforcement Learning via Generalized Occupancy Models


Chuning Zhu, Xinqi Wang, Tyler Han, Simon Du, Abhishek Gupta
Neural Information Processing Systems (NeurIPS), 2024
website / code / arXiv

Intelligent agents must be generalists, capable of quickly adapting to various tasks. In reinforcement learning (RL), model-based RL learns a dynamics model of the world, in principle enabling transfer to arbitrary reward functions through planning. However, autoregressive model rollouts suffer from compounding error, making model-based RL ineffective for long-horizon problems. [...]


Model Predictive Control for Aggressive Driving Over Uneven Terrain


Tyler Han, Alex Liu, Anqi Li, Alex Spitzer, Guanya Shi, Byron Boots
Robotics: Science & Systems (RSS), 2024
website / arXiv

Terrain traversability in unstructured off-road autonomy has traditionally relied on semantic classification, resource-intensive dynamics models, or purely geometry-based methods to predict vehicle-terrain interactions. While inconsequential at low speeds, uneven terrain subjects our full-scale system to safety-critical challenges at operating speeds of 7–10 m/s. This study focuses particularly on uneven terrain [...]


Dynamics Models in the Aggressive Off-Road Driving Regime


Tyler Han, Sidharth Talia, Rohan Panicker, Preet Shah, Neel Jawale, Byron Boots
Workshop on Resilient Off-Road Autonomy, ICRA, 2024
code / arXiv

Current developments in autonomous off-road driving are steadily increasing performance through higher speeds and more challenging, unstructured environments. However, this operating regime subjects the vehicle to larger inertial effects, where consideration of higher-order states is necessary to avoid failures such as rollovers or excessive impact forces. Aggressive driving through Model [...]


Learning Motor Primitives


Tyler Han, Carl Glen Henshaw
arXiv preprint, 2021
arXiv

In this undergraduate project, we tackled part of the challenge of teaching robots to perform motor skills from a small number of demonstrations. We proposed a novel approach that joins the theories of Koopman operators and Dynamic Movement Primitives for Learning from Demonstration. Our approach, named Autoencoder Dynamic Mode Decomposition [...]


Forked from Leonid Keselman's Website