Event Camera

Dynamic vision sensors (DVS) are neuromorphic devices that produce vision events asynchronously in response to changes in light intensity at each pixel. This is in contrast to conventional vision sensors, which produce images of the entire field of view at a fixed frame rate. Together with other desirable imaging properties such as high dynamic range and high temporal resolution, this makes DVS sensors a promising technology for demanding edge scenarios where energy-efficient intelligent computation is needed. The goal of this project is to address this need by developing an advanced hardware-software architecture that natively supports the co-design of DVS vision algorithms optimized for deployment in demanding edge applications.
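
To make the event model concrete, here is a minimal Python sketch of the asynchronous, per-pixel stream a DVS produces, assuming a generic (x, y, timestamp, polarity) event tuple. The field names and the handler are illustrative, not a specific sensor's API.

```python
from dataclasses import dataclass
from typing import Iterable

@dataclass
class Event:
    x: int         # pixel column where the intensity change occurred
    y: int         # pixel row
    t: float       # timestamp (e.g., in microseconds)
    polarity: int  # +1 for an intensity increase, -1 for a decrease

def handle(ev: Event) -> None:
    # Placeholder consumer; a real pipeline would feed a tracker or filter.
    print(f"pixel ({ev.x}, {ev.y}) changed at t={ev.t}, polarity={ev.polarity}")

def process_stream(events: Iterable[Event]) -> None:
    """Handle each event as it arrives: no frames, no fixed rate."""
    for ev in events:
        # Each event carries only the local change at one pixel, so
        # downstream logic can react without waiting for a full frame.
        handle(ev)
```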

Multi-Object Tracking using REMOT

FIGURE 1 shows the overall hardware/software architecture of REMOT, our FPGA-based attention-guided multi-object tracking (MOT) framework for event cameras. REMOT incorporates a parallel set of reconfigurable hardware attention units (AUs) that work in tandem with a modular attention-guided software framework running on the attached processor. Each hardware AU autonomously adjusts its region of attention by processing vision events as they are produced by the event camera, as sketched below.

FIGURE 1. REMOT: A HW/SW Architecture for Attention-Guided Multi-Object Tracking
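
The listing below is a simplified software sketch of the per-event attention idea, not the actual hardware AU logic: each unit keeps a square region of attention and nudges it toward events that fall inside it. The class and parameter names (AttentionUnit, half_size, alpha) are hypothetical, chosen only to illustrate the mechanism.

```python
class AttentionUnit:
    """Software sketch of one AU: a square region of attention that
    follows the events landing inside it (illustrative only; the real
    AUs are reconfigurable hardware blocks)."""

    def __init__(self, cx: float, cy: float,
                 half_size: float = 12.0, alpha: float = 0.05):
        self.cx, self.cy = cx, cy   # center of the region of attention
        self.half_size = half_size  # half the side length of the region
        self.alpha = alpha          # update rate per accepted event

    def accepts(self, x: int, y: int) -> bool:
        return (abs(x - self.cx) <= self.half_size and
                abs(y - self.cy) <= self.half_size)

    def update(self, x: int, y: int) -> bool:
        """Shift the region toward an event that lands inside it."""
        if not self.accepts(x, y):
            return False
        self.cx += self.alpha * (x - self.cx)
        self.cy += self.alpha * (y - self.cy)
        return True

def dispatch(event, units):
    """Offer each incoming event to every AU; in REMOT this happens
    in parallel across the hardware AUs rather than in a loop."""
    for au in units:
        au.update(event.x, event.y)
```

Because every AU can examine each incoming event independently, the per-event work maps naturally onto the parallel set of hardware AUs without serializing the event stream.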

REMOT is capable of processing 0.43–2.22 million events per second at 1.75–5.68 watts, making it suitable for real-time operation while maintaining good MOT accuracy on our target datasets. Compared with a software-only implementation on the same edge platforms, our HW-SW implementation achieves up to 33.6 times higher event-processing throughput and 25.9 times higher power efficiency.
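
For a sense of scale, power efficiency here can be read as events processed per joule. The quick calculation below pairs the upper ends of the two reported ranges; that pairing is an assumption made for illustration, since the throughput and power figures may come from different operating points.

```python
# Illustrative arithmetic, not a benchmark: the pairing of these two
# endpoints from the reported ranges is an assumption.
throughput_eps = 2.22e6   # events per second (upper end of 0.43-2.22 M)
power_w = 5.68            # watts (upper end of 1.75-5.68 W)

events_per_joule = throughput_eps / power_w
print(f"{events_per_joule:,.0f} events per joule")  # ~390,845 events/J
```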