Tyler Han

I am currently a PhD student in the Paul G. Allen School of Computer Science & Engineering at the University of Washington. I am part of the Robot Learning Lab where I am advised by Byron Boots. I am also an NSF Graduate Research Fellow.

Prior to UW, I completed my B.S. in Aerospace Engineering and B.S. in Computer Science at the University of Maryland, College Park. During my undergrad, I worked with Glen Henshaw and Patrick Wensing while at the Naval Research Laboratory in Washington, D.C.

Email  /  GitHub  /  LinkedIn


Research

I am generally interested in techniques that combine machine learning and engineering. Within robotics, I am particularly interested in ill-defined engineering and autonomy challenges; these problems often require some "human intuition" to help constrain the solution. Techniques I have been investigating include learning from demonstration (LfD) and inverse reinforcement learning (IRL).


Model Predictive Control for Aggressive Driving over Uneven Terrain


Tyler Han, Alex Liu, Anqi Li, Alex Spitzer, Guanya Shi, Byron Boots
arXiv, 2023
arxiv / website

Terrain traversability in off-road autonomy has traditionally relied on semantic classification or resource-intensive dynamics models to capture vehicle-terrain interactions. However, our experience developing a high-speed off-road platform has revealed several critical challenges that are not adequately addressed by current methods at our operating speeds of 7–10 m/s. [...]




Miscellaneous Projects

Other projects that are not necessarily research-related or published


Off-Road Self-Driving


DARPA Robotic Autonomy in Complex Environments with Resiliency
2022-06-22 — Present
video / video #2

Since beginning my PhD, I have primarily focused on work related to the DARPA RACER program as part of the UW team. The robot itself is a modified Polaris RZR with onboard sensing and compute. My work on the team has spanned a range of responsibilities: [...]


Learning Motor Primitives


Naval Research Laboratory
2021-01-01 — 2022-08-21
paper

In an undergraduate project, I tackled part of the challenge of teaching robots to perform motor skills from a small number of demonstrations. We proposed a novel approach that combines Koopman operator theory and Dynamic Movement Primitives for learning from demonstration. Our approach, named Autoencoder Dynamic Mode Decomposition [...]


Forked from Leonid Keselman's Website