Hello World!

“Feedback is a method of controlling a system by reinserting into it the results of its past performance… if [this] is able to change the general method and pattern of performance, we have a process which may well be called learning.” — Norbert Wiener

I’m a Robotics M.S. student and researcher at the University of Pennsylvania GRASP Lab, advised by Professor George Pappas. I earned my B.S. in Mechanical Engineering from UCLA and plan to start a Ph.D. in Fall 2026. This page tracks my work; soon I hope to post short notes on topics I enjoy learning and how I’ve come to understand them.

Nature suggests intelligence is less about substrate and more about closing the loop between perception, computation, and action. Slime molds trace shortest paths to their favorite food (oats) using chemical gradients; ants turn minimal individual capabilities into colony-level intelligence through decentralized coordination; octopuses push much of their perception and control into their arms, distributing two-thirds of their 500 million neurons across eight limbs; and plants, lacking a nervous system entirely, comprise roughly 80% of Earth’s biomass through signaling and tropisms unfolding across evolutionary timescales.

To me, robotics is our most direct and exciting attempt to engineer that same loop on physical systems. The word “robot” was coined in Karel Čapek’s 1920 play R.U.R. (from the Czech robota), and the field grew out of fiction into an intriguing intersection of science, art, math, philosophy, and economics. My passion for all things autonomy comes from a fascination with these worlds, and from a conviction that we should build and leverage intelligent systems to address the problems that shape human and planetary flourishing.

I’m most interested in autonomy that combines learning + control for tasks executed on actual hardware:

  • Real-world RL: sample-efficient, uncertainty-aware learning that works on hardware
  • Planning + constrained optimization: MPC/trajectory optimization with learned models and policies
  • Complex manipulation: contact-rich, long-horizon skills
  • Multi-agent autonomy: coordination and learning under partial or conflicting information
  • Safety + guarantees: robustness, generalization, and reliable behavior on critical tasks