Research
I am broadly interested in techniques that combine machine learning and engineering. Within robotics, I am particularly drawn to ill-defined engineering and autonomy challenges, which often require some "human intuition" to help constrain the solution. Techniques I have been investigating include learning from demonstration (LfD) and inverse reinforcement learning (IRL).
Miscellaneous Projects
Other projects that are not necessarily research-related or have not been published
Off-Road Self-Driving
DARPA Robotic Autonomy in Complex Environments with Resiliency
2022-06-22 — Present
video / video #2
Since beginning my PhD, I have primarily focused on work related to the DARPA RACER program as part of the UW team. The robot itself is a modified Polaris RZR with onboard sensing and compute. My work on the team has spanned a range of roles: conducting field tests (experimental runs with new perception, planning, and/or control methods), researching and developing optimal control algorithms, improving the robot software infrastructure, and more. I am now focused on improving the autonomous behavior through imitation and reinforcement learning.
Learning Motor Primitives
Naval Research Laboratory
2021-01-01 — 2022-08-21
paper
In an undergraduate project, I tackled part of the challenge of teaching robots to perform motor skills from a small number of demonstrations. We proposed a novel approach that joins the theories of Koopman Operators and Dynamic Movement Primitives for Learning from Demonstration. Our approach, named Autoencoder Dynamic Mode Decomposition (aDMD), projects nonlinear dynamical systems into linear latent spaces such that a solution reproduces the desired complex motion. The use of an autoencoder enables generalizability and scalability, while the constraint to a linear system provides interpretability. We show results on the LASA Handwriting dataset while training on only a small fraction of the letters.
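A minimal sketch of the core idea, assuming a standard PyTorch setup (module names, network sizes, and the training step are illustrative, not taken from the paper): an encoder projects the state into a latent space, a single linear operator advances the latent state, and a decoder maps back to the original coordinates, with reconstruction and one-step prediction losses minimized jointly.

```python
# Illustrative sketch of an autoencoder with linear (Koopman-style) latent dynamics.
import torch
import torch.nn as nn

class ADMD(nn.Module):
    def __init__(self, state_dim: int, latent_dim: int):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, latent_dim))
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, state_dim))
        # Linear operator that advances the latent state one time step.
        self.K = nn.Linear(latent_dim, latent_dim, bias=False)

    def forward(self, x_t):
        z_t = self.encoder(x_t)     # project the state into the latent space
        z_next = self.K(z_t)        # linear dynamics in the latent space
        return self.decoder(z_t), self.decoder(z_next)

# One illustrative training step on stand-in data (real demonstrations would
# supply consecutive state pairs, e.g. 2-D pen positions from handwriting).
model = ADMD(state_dim=2, latent_dim=8)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x_t, x_next = torch.randn(32, 2), torch.randn(32, 2)   # placeholder demo pairs
recon, pred = model(x_t)
loss = nn.functional.mse_loss(recon, x_t) + nn.functional.mse_loss(pred, x_next)
loss.backward()
opt.step()
```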