VISION-BASED LOCALIZATION OF MULTIPLE ROBOTS
ON HETEROGENEOUS PLATFORM
Role: Computer Vision & Robot Platform Build
Duration: Aug 2015 - Dec 2016
The objective of this project was to implement a vision-based multi-robot control system on a heterogeneous architecture combining an ARM Cortex processor and an FPGA. With the ARM handling the overall control flow and the FPGA handling the off-loaded image processing function blocks, the system broadcasts navigation commands to each robot according to the position and orientation detected from the video stream input. Off-loading the video processing to the FPGA made real-time processing and navigation control achievable. This project required learning how to develop an image pipeline in hardware that exploits parallelism, including pixel-level programming, filter windows with line buffers, and data transfer from the FPGA to the ARM.
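The filter-window and line-buffer pattern mentioned above can be illustrated with a minimal sketch. This is not the project's actual code; it is a hypothetical 3x3 box blur written in plain C++ the way an FPGA streaming pipeline would process it: pixels arrive one per cycle, two line buffers hold the previous rows, and a 3x3 window shifts one column per pixel.

```cpp
#include <cstdint>
#include <vector>

// Stream a grayscale frame pixel by pixel through a 3x3 box blur.
// line0/line1 model BRAM line buffers; window models the shift registers.
std::vector<uint8_t> boxBlur3x3(const std::vector<uint8_t>& in,
                                int width, int height) {
    std::vector<uint8_t> out(in.size(), 0);
    std::vector<uint8_t> line0(width, 0), line1(width, 0);  // two prior rows
    uint8_t window[3][3] = {};

    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            uint8_t px = in[y * width + x];
            // Shift the window left; fill the new right column from the
            // line buffers (rows y-2, y-1) and the incoming pixel (row y).
            for (int r = 0; r < 3; ++r)
                for (int c = 0; c < 2; ++c)
                    window[r][c] = window[r][c + 1];
            window[0][2] = line0[x];
            window[1][2] = line1[x];
            window[2][2] = px;
            // Advance the line buffers for the next row.
            line0[x] = line1[x];
            line1[x] = px;
            // Emit once the window covers valid pixels (interior only);
            // the window is centered at (y-1, x-1).
            if (y >= 2 && x >= 2) {
                int sum = 0;
                for (int r = 0; r < 3; ++r)
                    for (int c = 0; c < 3; ++c)
                        sum += window[r][c];
                out[(y - 1) * width + (x - 1)] = static_cast<uint8_t>(sum / 9);
            }
        }
    }
    return out;
}
```

In an HLS version the same structure maps naturally to hardware: the inner loop is pipelined to one pixel per clock, the line buffers become block RAM, and the window becomes a register array.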
Instead of writing a Hardware Description Language (HDL) directly for the FPGA prototypes, we implemented a heterogeneous programming flow that processes a real-time HD video stream on the FPGA to navigate the robots. The computer vision processing was written in C++ and converted to HDL using Vivado High-Level Synthesis (HLS). I implemented the object detection pipeline, the wireless communication between the Raspberry Pi and the FPGA, and simple navigation for two-wheeled robots.
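The navigation step above can be sketched as a simple go-to-goal controller for a two-wheeled (differential-drive) robot. This is an illustrative sketch, not the project's actual controller: the inputs are the pose detected from the video stream (position x, y and heading theta) plus a goal point, the gains kLin and kAng are assumed values, and the output is a pair of wheel speed commands.

```cpp
#include <cmath>

// Wheel speed commands for a differential-drive robot (units arbitrary).
struct WheelCmd { double left, right; };

// Proportional go-to-goal: drive forward in proportion to distance,
// turn in proportion to heading error toward the goal.
WheelCmd goToGoal(double x, double y, double theta,
                  double goalX, double goalY) {
    const double kLin = 1.0, kAng = 2.0;  // illustrative gains (assumed)
    double dx = goalX - x, dy = goalY - y;
    double dist = std::hypot(dx, dy);
    // Heading error, wrapped to [-pi, pi].
    double err = std::atan2(dy, dx) - theta;
    err = std::atan2(std::sin(err), std::cos(err));
    double v = kLin * dist;  // forward speed
    double w = kAng * err;   // turn rate
    // Differential mix: turning left means the right wheel runs faster.
    return { v - w, v + w };
}
```

A robot facing its goal gets equal wheel commands and drives straight; a goal off to the left produces a faster right wheel, turning the robot toward it.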