Projects
A curated list of research and projects that I've worked on or am working on.
SIPPing on path planning
Multi-agent pathfinding for warehouse robots [2018 - 2025]
While working at inVia Robotics, I spent a lot of time thinking about how to route robots around warehouses. This falls under an area of research known as "multi-agent path finding" (MAPF), where large numbers of agents must be coordinated to perform various tasks simultaneously. Typically these algorithms rely on a so-called "well-formed instance" of the MAPF problem, which the physical realities of our robots and deployments prohibited. This meant that we could not directly adapt algorithms from the research literature, and instead had to develop novel techniques. Our implementations went through several iterations, enabling smooth bidirectional flow of robots in warehouses with single-lane aisles.
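The section title nods to SIPP (safe interval path planning), a common building block in MAPF. As a hedged illustration — not inVia's actual implementation, and with all names invented for this sketch — the snippet below computes the "safe intervals" for a single grid cell given the times other robots occupy it. This is the core idea of SIPP-style planners: search over (cell, safe interval) states rather than every timestep, which drastically shrinks the state space.

```cpp
#include <algorithm>
#include <cassert>
#include <utility>
#include <vector>

// Illustrative only: compute the safe intervals for one grid cell from the
// time intervals during which other robots occupy it.
using Interval = std::pair<int, int>;  // [start, end), in discrete timesteps

std::vector<Interval> safeIntervals(std::vector<Interval> occupied, int horizon) {
    std::sort(occupied.begin(), occupied.end());
    std::vector<Interval> safe;
    int t = 0;
    for (const auto& [start, end] : occupied) {
        if (start > t) safe.push_back({t, start});  // gap before this occupation
        t = std::max(t, end);
    }
    if (t < horizon) safe.push_back({t, horizon});  // tail interval after the last occupation
    return safe;
}
```

For example, a cell occupied over [2, 4) and [6, 8) within a horizon of 10 has safe intervals [0, 2), [4, 6), and [8, 10) — three search states instead of ten timesteps.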
Learning with Ambiguity
Modulatory feedback networks with conflict and ambiguity [2019]
This work lays the foundation for a framework of cortical learning based upon the idea of a "competitive column", which
is inspired by the functional organization of neurons in the cortex. A column describes a prototypical organization for
neurons that gives rise to an ability to learn scale, rotation, and translation invariant features. This expands upon
conflict learning by introducing a concept of neural ambiguity and an adaptive thresholding scheme. Ambiguity, which
captures the idea that "too many decisions lead to indecision", gives the network a way to resolve locally ambiguous
decisions. The framework is demonstrated on a large-scale (at the time of publication) network, learning an invariant
model of border ownership capable of processing contours from natural images. Combined with conflict learning, the
competitive column and ambiguity give a better intuitive understanding of how feedback, modulation, and inhibition may
interact in the brain to influence activation and learning.
ic3po
Iterative Closest Point Plus Plane Optimization [2019]
A follow-up to rolodex planes, this work extended the frame-by-frame plane-based registration into a SLAM framework that could leverage both point and plane features. The algorithm smoothly interpolates between the efficiency of the plane-based features and the slower but always-present point features to achieve real-time (sensor collection rate, 10 Hz) localization and mapping. It provided state-of-the-art results when published.
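To make the plane-based part concrete, here is the generic point-to-plane residual that underlies most plane-based registration pipelines — a hedged sketch with invented names, not necessarily ic3po's exact formulation. Given a source point p and a target point q lying on a plane with unit normal n, the residual is the signed distance from p to that plane; a solver minimizes the sum of squared residuals, and because one plane constrains many points at once, plane features are much cheaper than raw point-to-point matching.

```cpp
#include <array>
#include <cassert>
#include <cmath>

// Illustrative only: the standard point-to-plane residual dot(p - q, n),
// i.e. the signed distance from point p to the plane through q with unit normal n.
using Vec3 = std::array<double, 3>;

double dot(const Vec3& a, const Vec3& b) {
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
}

double pointToPlaneResidual(const Vec3& p, const Vec3& q, const Vec3& n) {
    return dot({p[0] - q[0], p[1] - q[1], p[2] - q[2]}, n);
}
```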
Conflict Learning
A biologically plausible emergent border ownership neural network [2017]
The brain performs a vital computation early along visual processing known as "border ownership," in which neurons selectively respond to edges on the "inside" or "outside" of objects. Previous work focuses mostly on designing networks that can perform border ownership, whereas this work was concerned with how the brain learns to perform this operation. The computation of border ownership is believed to rely on an interplay of feedforward and feedback signals, and traditional approaches proved insufficient for learning the competing polarity responses required of border ownership. The "conflict learning" rule developed in this work successfully captures the dynamics required for border ownership (as well as edge detection) by diverging from typical Hebbian-like rules, focusing on the ability to learn modulatory feedback.
Rolodex Planes
Plane based frame-to-frame registration [2013]
A robust plane finding algorithm that can be used in conjunction with plane based frame-to-frame registration to give
accurate real-time pose estimation. This was born out of support through the DARPA Robotics Challenge. The ultimate goal of this project was to
perform real-time simultaneous localization and mapping (SLAM) using planes extracted from spinning laser sensors such
as the Velodyne HDL-32E, which provides 360°
point clouds at a high rate. Many traditional algorithms for plane finding do not work well on this kind of data due
to the structure of the Velodyne data. The first set of results from this work were presented at IROS 2013.
cereal
A modern C++ serialization library
cereal is a header-only C++ serialization library that Randolph Voorhies and I developed. It is designed to be easily embedded in a project and was born out of frustrations with the Boost serialization library. It supports almost the entire standard library, is space and time efficient, and can serialize to binary, JSON, and XML out of the box. It was written to be easily extensible and has great documentation at its website.
Salient Green
Using symmetry to enhance bottom-up saliency [2009, 2012]
- Saliency Mapping Enhanced by Symmetry from Local Phase, published in ICIP 2012
- Source code (requires C++11 compiler)
Salient green started as a group project I did with Kevin Heins while an undergraduate at UCSD taking CSE 190A with Serge Belongie in 2009. The idea was to extend the basic Itti et al. model by introducing symmetry and manipulating the color space. We performed a simple experiment having humans judge masked images created using the algorithm and found that it outperformed the basic model. When I came to USC, I re-implemented the model and performed objective testing against other saliency models, including another by Kootstra that is symmetry based.
NRT
Along with several other people at iLab, I contributed to NRT, a neuromorphic robotics toolkit. My work on NRT mostly concerned developing point cloud functionality, similar to PCL (the Point Cloud Library). NRT is a framework for developing interchangeable processing modules that communicate by passing messages.
UCSD AUVSI UAS Team
An unmanned aerial system (UAS) [2006 - 2010]
While an undergraduate at UCSD, I participated in the UCSD AUVSI UAS team all four years I was there. The team competed in the annual international AUVSI UAS competition held in Maryland, placing as high as 2nd in 2009 and earning the 1st place journal in 2010. The competition consisted of autonomously flying a UAV while searching for alphanumeric targets on the ground. I was responsible for most of the software and imaging systems on the team, designing an autonomous recognition system powered by the saliency algorithm described in Salient Green. Salient regions were segmented and then processed autonomously to identify the character, colors, and orientation of alphanumeric targets. The final implementation used CUDA to run the analysis in real time.
Stacked is an instruction set architecture developed out of a group project at UCSD for CSE 141/141L. The ISA is stack-based with no registers available to the programmer: all data is manipulated on local data stacks, a global data stack, or an address stack. The ISA, together with its simulator (developed in Java), was a finalist in the 2010 IEEE Computer Society competition. During CSE 141L, the entire ISA was also prototyped and demonstrated to work in Verilog.
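To illustrate the defining property of a register-free, stack-based ISA — every instruction reads its operands from, and writes its result to, the top of a data stack — here is a tiny evaluator. The opcodes are invented for this sketch and are not the actual Stacked ISA.

```cpp
#include <cassert>
#include <vector>

// Illustrative only: a minimal stack machine with invented opcodes.
enum class Op { Push, Add, Mul };
struct Instr { Op op; int imm; };  // imm is used only by Push

int run(const std::vector<Instr>& program) {
    std::vector<int> stack;  // the data stack; no registers exist
    for (const auto& [op, imm] : program) {
        switch (op) {
            case Op::Push: stack.push_back(imm); break;
            case Op::Add: { int b = stack.back(); stack.pop_back();
                            stack.back() += b; break; }
            case Op::Mul: { int b = stack.back(); stack.pop_back();
                            stack.back() *= b; break; }
        }
    }
    return stack.back();  // the result is left on top of the stack
}
```

Computing (2 + 3) * 4 becomes the sequence Push 2, Push 3, Add, Push 4, Mul — operand order is implicit in stack position, which is why such ISAs need no register fields in their instruction encoding.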