Tyler Han

I am currently a PhD student in the Paul G. Allen School of Computer Science & Engineering at the University of Washington. I am part of the Robot Learning Lab where I am advised by Byron Boots. I am also an NSF Graduate Research Fellow.

Prior to UW, I completed my B.S. in Aerospace Engineering and B.S. in Computer Science at the University of Maryland, College Park. During my undergrad, I worked with Glen Henshaw and Patrick Wensing while at the Naval Research Laboratory in Washington, D.C.

Email  /  GitHub  /  Google Scholar  /  LinkedIn  /  CV

profile photo

News

06/2025 Award committee member and organizer for Resilient Off-Road Autonomous Robotics Workshop at RSS 2025
10/2024 Invited to review: International Conference on Robotics and Automation (ICRA) 2025, Robotics and Automation Letters (RA-L), and Transactions on Robotics (T-RO)
10/2024 "Transferable Reinforcement Learning via Generalized Occupancy Models" is accepted to NeurIPS 2024
05/2024 "Model Predictive Control for Aggressive Driving Over Uneven Terrain" is accepted to RSS 2024
05/2024 "Dynamics Models in the Aggressive Off-Road Driving Regime" is accepted to ICRA 2024 Workshop on Resilient Off-Road Autonomy
04/2024 Invited to review for ICRA 2024 Workshop on Resilient Off-Road Autonomy
03/2023 Awarded the National Science Foundation Graduate Research Fellowship (NSF GRFP)

Research

Animals need to observe a behavior only a handful of times before imitating it through experience. However, current machine learning methods require orders of magnitude more data to imitate a demonstration. I am interested in methods that enable robots to attain the same level of efficiency and robustness as animals.

project image

Model Predictive Adversarial Imitation Learning for Planning from Observation


Tyler Han, Yanda Bao, Bhaumik Mehta, Gabriel Guo, Anubhav Vishwakarma, Emily Kang, Sanghun Jung, Rosario Scalise, Jason Zhou, Bryan Xu, Byron Boots
Under Review, 2025
/ Preprint Coming Soon

Human demonstration data is often ambiguous and incomplete, motivating imitation learning approaches that also exhibit reliable planning behavior. A common paradigm learns a reward function via Inverse Reinforcement Learning (IRL) and deploys it using Model Predictive Control (MPC) to reliably imitate expert behavior. In this work, we propose replacing the [...]

project image

Wheeled Lab: Modern Sim2Real for Low-Cost, Open-Source Wheeled Robotics


Tyler Han, Preet Shah, Sidharth Rajagopal, Yanda Bao, Sanghun Jung, Sidharth Talia, Gabriel Guo, Bryan Xu, Bhaumik Mehta, Emma Romig, Rosario Scalise, Byron Boots
arXiv preprint, 2025
website / code / poster / arXiv / NVIDIA spotlight / Q&A video

Simulation has been pivotal in recent robotics milestones and is poised to play a prominent role in the field’s future. However, recent robotic advances often rely on expensive and high-maintenance platforms, limiting access to broader robotics audiences. This work introduces Wheeled Lab, a framework for the low-cost, open-source wheeled platforms [...]

project image

Distributional Successor Features Enable Zero-Shot Policy Optimization


Chuning Zhu, Xinqi Wang, Tyler Han, Simon Du, Abhishek Gupta
Neural Information Processing Systems (NeurIPS), 2024
website / code / arXiv

Intelligent agents must be generalists, capable of quickly adapting to various tasks. In reinforcement learning (RL), model-based RL learns a dynamics model of the world, in principle enabling transfer to arbitrary reward functions through planning. However, autoregressive model rollouts suffer from compounding error, making model-based RL ineffective for long-horizon problems. [...]

project image

Model Predictive Control for Aggressive Driving over Uneven Terrain


Tyler Han, Alex Liu, Anqi Li, Alex Spitzer, Guanya Shi, Byron Boots
Robotics: Science & Systems (RSS), 2024
website / arXiv

Terrain traversability in unstructured off-road autonomy has traditionally relied on semantic classification, resource-intensive dynamics models, or purely geometry-based methods to predict vehicle-terrain interactions. While inconsequential at low speeds, uneven terrain subjects our full-scale system to safety-critical challenges at operating speeds of 7–10 m/s. This study focuses particularly on uneven terrain [...]

project image

Dynamics Models in the Aggressive Off-Road Driving Regime


Tyler Han, Sidharth Talia, Rohan Panicker, Preet Shah, Neel Jawale, Byron Boots
Workshop on Resilient Off-Road Autonomy, ICRA, 2024
code / arXiv

Current developments in autonomous off-road driving are steadily increasing performance through higher speeds and more challenging, unstructured environments. However, this operating regime subjects the vehicle to larger inertial effects, where consideration of higher-order states is necessary to avoid failures such as rollovers or excessive impact forces. Aggressive driving through Model [...]

project image

Learning Motor Primitives


Tyler Han, Carl Glen Henshaw
arXiv preprint, 2021
arXiv

In an undergraduate project, I tackled part of the challenge of teaching robots to perform motor skills from a small number of demonstrations. We proposed a novel approach that applies the theories of Koopman Operators and Dynamic Movement Primitives to Learning from Demonstration. Our approach, named Autoencoder Dynamic Mode Decomposition [...]


Forked from Leonid Keselman's Website