Hello World!

“Feedback is a method of controlling a system by reinserting into it the results of its past performance… if [this] is able to change the general method and pattern of performance, we have a process which may well be called learning.” — Norbert Wiener

I’m a Robotics M.S. student and researcher at the University of Pennsylvania GRASP Lab, advised by Professor George Pappas. I earned my B.S. in Mechanical Engineering from UCLA and plan to start a Ph.D. in Fall 2026. This page tracks my work; soon I hope to post short notes on topics I enjoy learning and how I’ve come to understand them.

Nature seems to suggest that intelligence has less to do with what something is made of and more to do with whether it can close the loop between perception, computation, and action. Slime molds, for example, trace shortest paths to their favorite food (oats) by following chemical gradients; ants turn minimal individual capabilities into colony-level intelligence through decentralized coordination; octopuses push much of their perception and control into their arms, distributing two-thirds of their 500 million neurons across eight limbs; and plants, lacking a nervous system entirely, sense and act through chemical signaling and tropisms unfolding over far slower timescales, yet make up roughly 80% of Earth’s biomass.

Robotics, to me, is humanity’s most direct and exciting attempt to engineer that same feedback loop. The term comes from Karel Čapek’s 1920 play R.U.R. via the Czech word robota (forced labor), and it is enthralling to consider how a piece of fiction has turned into a serious technical discipline spanning mathematics, physics, biology, philosophy, and even art. I’m drawn to building autonomous systems because it forces us to reconcile our abstract ideas and algorithms with the inherent messiness of reality, and because getting these intelligent systems right gives us real leverage on problems central to human and planetary flourishing.

I’m most interested in autonomy that combines learning + control for tasks executed on actual hardware:

  • Real-world RL: sample-efficient, uncertainty-aware learning that works on hardware
  • Planning + constrained optimization: MPC/trajectory optimization with learned models and policies
  • Complex manipulation: contact-rich, long-horizon skills
  • Multi-agent autonomy: coordination and learning under partial or conflicting information
  • Safety + guarantees: robustness, generalization, and reliable behavior on safety-critical tasks