DVS (dynamic vision sensor) technologies hold the potential to revolutionize imaging systems by enabling asynchronous, event-based image acquisition. DVS pixels generate and transmit events only when the light intensity at a pixel changes. This approach has many advantages compared to conventional CMOS image sensors (CIS), such as: (i) higher dynamic range, (ii) higher sampling rates, (iii) lower bandwidth requirements between the sensor and the processing unit, and (iv) lower power consumption. These characteristics make DVS attractive sensors for energy-constrained scenarios such as IoT applications.
In this paper, we focus on the application of DVS-based systems to pedestrian detection. A common solution to this problem involves streaming data from a CIS to a processing module that runs the detection algorithm. Since the raw data from the imaging sensor can be overwhelming, the images are usually compressed before transmission. This approach (i) requires either a large bandwidth or a low frame rate to stream the data in a bandwidth-constrained environment, and (ii) raises inherent privacy concerns, as streamed images may be accessed by malicious third parties. Inference at the edge [edgeFacebook], where data acquisition and processing are performed on-device, has been proposed as a solution to these problems. Unfortunately, the amount of energy required for inference at the edge when using a CIS limits its applicability. Near-chip feature extraction and data compression have the potential to provide a middle-ground solution.
Towards this goal, we propose a near-chip filtering architecture for pedestrian detection (Fig. 1). Our solution requires low bandwidth for transmitting the intermediate representations between the sensor and the processing platform. Moreover, it enhances privacy through lossy subsampling, which makes it impossible to recover the original event representation. A single compressed packet produced by our near-chip filter has a total length of on average, and may be streamed through low-bitrate channels to a centralized networking node (Fig. 2).
Contributions: we make two main contributions: 1) a low-complexity hardware implementation of an event filter suited for DVS, targeted at a pedestrian detection system, which reduces the bandwidth required to transmit the events by up to , and 2) an efficient detection algorithm that uses our intermediate representation, running on a 32-bit microcontroller architecture.
II Related Work
The use of DVS for detection and pattern recognition has received recent attention [Kostas2019-DVSSurvey]. Common target tasks include recognition of digits and simple shapes such as card suits [timeSurfaces, faceDetection], as well as pedestrian detection [personDetectorCars, ryu2017, Jiang2019-ICRA]. Most of these approaches are implemented on GPU or microprocessor-based architectures and are not specifically targeted at IoT applications, which typically operate under strict energy budgets.
The asynchronous and low-bandwidth nature of event cameras makes them potentially ground-breaking for IoT. However, DVS sensors are inherently noisy, making their application challenging. Recent work addresses the filtering of DVS noise [Khodamoradi2017-transactions, linares2017-filtering]. A description of filtering techniques with their respective hardware implementations is presented in [linares2019-filtersFpga]. However, these filters are targeted at high-bandwidth applications and are not specifically designed for bandwidth reduction. Hence, they are not necessarily suitable for IoT scenarios.
Several end-to-end IoT detection architectures have been proposed as well. In [rusci2018-thesis], Rusci et al. showcased the advantages of the sensor for always-on applications by coupling an event-based image sensor with a PULPv3 processor. While this work shows significant reductions in energy consumption, the event stream is sent to an embedded platform without any further preprocessing. Thus, this method requires the processing platform to be near the DVS due to the bandwidth required to transmit the events. In [luca2019-pcarect], the authors present an FPGA-suitable architecture for DVS detection using PCA. While this architecture performs well on classification, it is not particularly targeted at low-power applications, as it uses a high-end FPGA family. Other end-to-end architectures, such as TrueNorth [arnon2017-cvpr-pattern], make use of dedicated neuromorphic hardware to process the event stream. That work analyzes a gesture recognition task, obtaining an accuracy of 96.5% when detecting 11 different gestures, while consuming . Compared to this work, our system has the advantage of (i) lower energy consumption, and (ii) the capability of saving the low-bandwidth features for further downstream applications.
The method presented in this paper consists of two main modules, each composed of several submodules. In this section, we describe the algorithms and present implementation details. Specifically, we will address:
The filtering module: a network-aware component that runs near chip. It denoises the DVS stream and compresses the data for further downstream processing. The input of this module is the raw event stream produced by the sensor, and the outputs are discrete Huffman-coded packets that are transmitted to the detection module.
The detection module: receives the coded event representation packet from the filtering module, decodes it, and performs the pedestrian detection.
The combination of these two modules reduces the filtered event bandwidth while maintaining high detection accuracy.
III-A Filtering Module Description and Implementation
The filtering module consists of four main submodules: Event Parsing, Coincidence Detection, Aggregation-Downsampling, and Huffman encoding. The architecture was implemented using the Chisel 3 [bachrach2012chisel] hardware description language. We will describe these submodules in this section, as well as the sensor used during our experiments.
III-A1 Sensor
We used a sensor similar to the one described in [ryu2017-dvsarch], operated at a resolution of pixels. The event rate of our sensor was . The DVS was connected directly to the FPGA, which was responsible for processing the events in the G-AER packet format [ryuCVPR].
III-A2 Event Parser
The event parser submodule translates the G-AER representation of the sensor to an (x, y, p) representation, where x and y are the row and column addresses of the pixel in the sensor array, and p is the polarity encoded with two bits. While G-AER allows for a significant bandwidth reduction between the sensor and the FPGA, inside our architecture the representation described above is easier to work with for memory addressing purposes.
Implementation: The event parsing submodule was implemented as an input FIFO queue capable of storing 256 G-AER events, followed by a LUT-based decoder. The FIFO allows us to handle rapid bursts of events from the sensor.
III-A3 Coincidence Detection
DVS pixel arrays are susceptible to background activity noise, which appears as impulse noise when DVS events are observed in a finite time window. Noisy pixels are commonly isolated, unlike signal pixels, and may thus be removed by observing a sequence of pixel activations over space or time. Our filter works by detecting pairs of active pixels in the vertical and horizontal spatial directions.
The coincidence detection serves a dual purpose in our architecture: first, it collects events in a predefined time window of length . Then it performs a logical AND operation between adjacent pixels. This filter is inspired by the OMD filter described in [linares2019-filtersFpga], but it has two fundamental differences: (i) we use simple bitwise operations between pixels instead of a complex integrate-and-fire model, and (ii) a coincidence is detected only if two pixels with the same polarity are active. In our architecture, .
Implementation: The coincidence detection is implemented as two discrete memories () each of size bits. In phase 1, , the memory array starts in a cleared state, and it collects events until , when the window period has elapsed. In phase 2, from until , the memory array is read and the coincidences are evaluated by observing adjacent active vertical and horizontal pixels. At the same time, is collecting the events corresponding to this time window. The output of this submodule is composed of two channels, corresponding to the filter applied in the vertical and horizontal dimensions. Only active pixels are sent to the aggregation submodule.
On the FPGA, all memory blocks were implemented with dual-port BRAM slices. During the readout operation, a line buffer of pixels is used to store the intermediate pixels read. The coincidence detection submodule also propagates a signal indicating the start and end of a time window to the aggregation submodule.
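The coincidence test can be illustrated in software. The following is a minimal Python/NumPy sketch of the idea, not the Chisel hardware pipeline (the function name and array encoding are ours): each pixel is ANDed with its vertical or horizontal neighbour, and a coincidence is kept only when both neighbours share the same polarity.

```python
import numpy as np

def coincidence_filter(frame):
    """Keep a pixel only if an adjacent pixel of the SAME polarity is
    also active within the same time window. `frame` is an (H, W)
    array with values in {0, +1, -1} (accumulated event polarities).
    Returns the vertical and horizontal coincidence channels."""
    pos = frame == 1
    neg = frame == -1
    # Vertical coincidences: a pixel AND its neighbour one row below.
    vert = (pos[:-1, :] & pos[1:, :]) | (neg[:-1, :] & neg[1:, :])
    # Horizontal coincidences: a pixel AND its neighbour one column right.
    horiz = (pos[:, :-1] & pos[:, 1:]) | (neg[:, :-1] & neg[:, 1:])
    return vert, horiz
```

Isolated (noise) events have no same-polarity neighbour and are suppressed, while edges produced by moving objects survive in at least one channel.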
III-A4 Aggregation and Subsampling
In a static DVS application, when binning events in a time window, the thickness of an edge depends on both the velocity of the object and the length of the time window. The function of the aggregation submodule is to increase the thickness of the edges to a normalized size before performing inference. For this, the aggregation submodule performs successive logical OR operations across the temporal dimension until the number of events in the aggregated frame is above a threshold. If the threshold is not reached within 5 time windows, the frame buffer is cleared and no events are propagated.
After the aggregation operation, a max-pooling operation is performed on the aggregated time window. The max-pooling operation reduces the scale dependency of the object in the scene as well as the dimensionality of the data. The subsampling submodule operates asynchronously, only when the aggregation submodule reports new data.
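The aggregation rule can be sketched as follows. This is an illustrative Python sketch rather than the hardware implementation (the function name is ours); the threshold and window-limit defaults mirror the 1000-event threshold and 5-window integration limit described in the implementation details.

```python
import numpy as np

def aggregate(windows, threshold=1000, max_windows=5):
    """OR successive binary coincidence frames together until the
    accumulated frame holds at least `threshold` active pixels.
    If the threshold is not reached within `max_windows` windows,
    the buffer is cleared and nothing is propagated (None).
    `windows` is an iterable of (H, W) boolean frames."""
    acc = None
    for i, w in enumerate(windows):
        acc = w.copy() if acc is None else (acc | w)
        if acc.sum() >= threshold:
            return acc          # edge is "thick" enough: propagate
        if i + 1 >= max_windows:
            return None         # integration limit reached: discard
    return None
```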
Implementation: The aggregation submodule is duplicated in order to independently process each channel coming from the coincidence detection submodule. Each pixel streamed into aggregation is stored in the aggregated frame block memory (). At the start of every window, a window counter is incremented; this counter implements the temporal window integration limit of 5. An event counter also tracks the number of pixels in the max-pooled and aggregated window. At the end of every -sized window, the event counter is checked against the event threshold (1000 events). If the threshold is exceeded, the aggregated frame is sent to subsampling.
The subsampling submodule is implemented using a block memory layout. Normally, to store an image in memory, a column-based layout is used, where pixels are stored sequentially by column index. The problem with column indexing for max-pooling is that each pooling operation must access several different memory blocks. Instead, we use a block memory layout: each memory block stores the pixels of one max-pooling area. Hence, a single memory read followed by a comparison against 0 performs max-pooling in a single clock cycle.
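The benefit of the block layout can be illustrated in software. In the Python sketch below (the function name is ours), all pixels of one k×k pooling window are gathered as a single "block", and the pooled output is just that block compared against zero, mirroring the one-read-one-compare hardware operation.

```python
def block_layout_maxpool(frame, k):
    """Max-pool a binary frame with k*k windows using a block layout:
    the pixels of one pooling window are read together as one block,
    and the pooled value is a single comparison against zero."""
    h, w = len(frame), len(frame[0])
    out = [[0] * (w // k) for _ in range(h // k)]
    for by in range(h // k):
        for bx in range(w // k):
            # One "block read": the k*k bits of this pooling window.
            block = [frame[by * k + i][bx * k + j]
                     for i in range(k) for j in range(k)]
            out[by][bx] = 1 if any(block) else 0  # compare against 0
    return out
```

With a column-based layout, the same window would straddle k different memory words, costing k reads per output instead of one.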
III-A5 Huffman Encoder and Filter Interface
After aggregation, the output of the filter is a discrete packet of bits, corresponding to the pixel readouts of the downsampled aggregated image for the vertical and horizontal channels. To further reduce the data bandwidth, we perform Huffman encoding using a precomputed 256-word dictionary. On average, this results in a reduction of the payload size.
Implementation: The Huffman filter is implemented by storing the codeword dictionary in BRAM and doing a lookup over the output of the aggregation submodule. The data is preceded by a 32-bit preamble header, and a low-complexity Fletcher-32 checksum [fletcher1982-checksum] is appended at the end of the packet (Fig. 3).
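For reference, Fletcher-32 runs two accumulators over 16-bit words, each reduced modulo 65535. Below is a minimal Python sketch; the little-endian word packing and zero-padding of odd-length payloads are common conventions that the paper does not specify, so treat them as assumptions.

```python
def fletcher32(data: bytes) -> int:
    """Fletcher-32 checksum: two running sums over 16-bit words,
    both reduced modulo 65535. Assumes little-endian word packing
    and zero-padding for odd-length payloads."""
    if len(data) % 2:
        data += b"\x00"                      # zero-pad odd lengths
    s1 = s2 = 0
    for i in range(0, len(data), 2):
        word = data[i] | (data[i + 1] << 8)  # little-endian 16-bit word
        s1 = (s1 + word) % 65535             # simple sum
        s2 = (s2 + s1) % 65535               # sum of sums (order-sensitive)
    return (s2 << 16) | s1
```

The second sum makes the checksum sensitive to word order, which a plain sum is not, while staying far cheaper than a CRC on a small decoder.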
For testing purposes, we streamed the event representation using a UART serial interface between the FPGA and the detection module. Nonetheless, other communication interfaces may be used by simply changing the last submodule in the near-chip filter. For instance, the same filtering scheme could be used with inter-chip communication protocols such as I2C or SPI, as well as other field-bus protocols.
III-B Detection Module Architecture and Implementation (BNN)
The detection module performs binary classification for pedestrian detection from the sparse output of the filter. It is a CNN-based architecture with binary weights, as described in [XnorNet].
The network architecture, presented in Fig. 4, is composed of two hundred convolutional filters with binary () weights. As the output of the filter is encoded using a binary representation, the convolution operation is implemented as a binary AND operation with the positive and negative convolution filters, followed by a population count operation.
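The AND-plus-popcount convolution can be sketched as follows. This is a hypothetical Python illustration over bit-packed windows (names and packing are ours): the filter response is the number of active pixels hitting +1 weights minus the number hitting -1 weights.

```python
def binary_conv_response(window, pos_filter, neg_filter):
    """Response of one binary-weight filter on a bit-packed binary
    input window: AND with the +1 and -1 weight masks, then take
    population counts and subtract. All arguments are ints whose
    bits encode pixels/weights of the window."""
    pos_hits = bin(window & pos_filter).count("1")  # pixels on +1 weights
    neg_hits = bin(window & neg_filter).count("1")  # pixels on -1 weights
    return pos_hits - neg_hits
```

This replaces every multiply-accumulate of a conventional convolution with one AND and one population count per weight mask.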
To accelerate our calculations, we used DSP SIMD instructions, as well as the FPU of the Cortex-M4 core. This processor lacks a dedicated population count instruction (popcnt), which is required for the neural network inference process. Therefore, we implemented this operation as a LUT in flash. While this approach increases the required storage space, it trades storage for faster execution.
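The LUT-based population count can be sketched as follows; this Python illustration shows the same trick, splitting a 32-bit word into four bytes and looking each up in a 256-entry table (on the microcontroller, the table resides in flash). The exact table width used in the C implementation is not stated, so an 8-bit table is an assumption here.

```python
# 256-entry look-up table: popcount of every possible byte value.
POPCOUNT_LUT = bytes(bin(b).count("1") for b in range(256))

def popcount32(x: int) -> int:
    """Population count of a 32-bit word via four byte-wise LUT
    reads, standing in for a missing hardware popcnt instruction."""
    return (POPCOUNT_LUT[x & 0xFF]
            + POPCOUNT_LUT[(x >> 8) & 0xFF]
            + POPCOUNT_LUT[(x >> 16) & 0xFF]
            + POPCOUNT_LUT[(x >> 24) & 0xFF])
```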
The resulting convolution value is then scaled by a positive factor , followed by a ReLU nonlinearity and whole-frame max-pooling. The detection value is obtained after a linear readout and thresholding.
IV-A Filtering Module Hardware Implementation Results
The filter was synthesized on a Spartan-6 FPGA. The maximum clock frequency achieved by our design was .
We observed that our solution uses few resources on a low-end FPGA platform: the utilization of registers is , of LUTs , and of DSPs . The BRAM utilization is higher, mainly due to the intermediate representations required to store events during a time window in the coincidence detection and aggregation submodules. This utilization includes the FIFO buffer for the DVS packets in the event parser submodule, as well as the Chisel 3 DecoupledIO interfaces used to transmit information asynchronously between the submodules.
For reference, PCA-RECT [luca2019-pcarect] reports a utilization of , , and on a Zynq-7020 SoC. Other detection architectures, such as NullHop [FPGA-DVS-Implementation], report even higher resource utilization. Our architecture outperforms PCA-RECT on all utilization parameters except the number of BRAMs. This is not a surprise: offloading the computational load of the detection algorithm to the computing node helps keep the slice count low. Our approach thus achieves low utilization while keeping the bandwidth reduced, without requiring a full detector implementation near-chip.
We also synthesized RetinaFilter, the implementation of the background activity filter for DVS by Linares-Barranco et al. [linares2019-filtersFpga], using the same target architecture and configuration parameters as our filter. This architecture requires , , and no DSPs. It undoubtedly has lower slice requirements than our filter, but it offers no bandwidth reduction beyond denoising the image. Thus, this implementation may not be directly applicable in an IoT environment for object detection.
To assess the power consumption of the near-chip architecture, we used the Xilinx XPower Analyzer. Our module requires a total of at . For reference, the static power consumption of [luca2019-pcarect] is and its dynamic power consumption is .
IV-B Detection Module Implementation Results
The detection module was implemented on an STM32F429 microcontroller. The output of the filtering module is fed into the microcontroller through a UART port working at . Packets are copied into memory using DMA, and the network starts processing them as soon as a full packet is detected and its checksum is verified. The microcontroller is kept in a sleep state when no packets are sent through the UART, which corresponds to the case when there is no activity significant enough for the DVS event filter to output events. This helps reduce the overall energy consumption of the detection module.
Our network achieved an average of one inference every , with the microcontroller core running at . It required of flash memory to operate: corresponded to our network parameters, and was used to accelerate calculations through precomputed look-up tables (such as the population count described previously). Finally, our network requires of RAM to operate.
IV-C Filtering Module Performance
Throughout this work, various approaches were tested in order to achieve high compression with little reduction in testing accuracy. The full pipeline of our filtering module consists of Coincidence Detection (CO), Aggregation (AG), Max Pooling (MP), and Huffman Coding. Each of these submodules reduces the bandwidth of the event stream and increases the detection accuracy. To explain our design choices, we present various ablations of our method (Fig. 5).
To perform our measurements, we used a DVS dataset of 273 clips of humans, each with a duration of 2.5 s, and 548 clips of non-humans, each with a duration of 0.75 s. The dataset was split into a training set (80%) and a testing set (20%), and yielded 92380 time windows of person and object movement. The raw event stream bandwidth was 22.25 Mb/s on average.
First, we trained our binary neural network using our full pipeline and obtained a testing F1 score of . Additionally, the measured bandwidth after filtering was .
The first ablation was removing the coincidence detection submodule. This resulted in a lower testing F1 score and a higher bandwidth compared to the full pipeline, showing the effect of the coincidence detection in removing noise: noisy DVS events increase bandwidth, and noisier images are harder to detect.
The second ablation was removing the aggregation submodule. This resulted in a lower testing F1 score and a higher output bandwidth. The higher bandwidth is due to the additional frames produced when no temporal filtering is applied; the lower testing F1 score without aggregation is due to less well-defined features for shallow learning with a BNN.
The third ablation was changing the max-pooling size. The default value used in our pipeline was . When increasing this value, the bandwidth decreased, but the testing F1 score decreased as well, due to the lack of features caused by the large dimensionality reduction. When decreasing the max-pooling size, the bandwidth increased, yet performance improved only slightly (by about 1%). This improvement was small enough that we accepted the trade-off in exchange for a lower-bandwidth model.
Our filter is capable of processing data as fast as the coincidence detection window (), resulting in the bandwidth reported below (). We may further reduce the bandwidth by temporally downsampling the detection rate through a refractory period mechanism. For instance, if the filter produces an output every , the final required bandwidth is on average (when some time windows are not propagated due to low event counts), and at most . This enables the use of our architecture in IoT applications using low-rate wireless standards such as 802.15.4 [ieee802standard], LoRa, and NB-IoT [sinha2017-iotspeed].
V Conclusion and Further Work
This paper introduces a novel end-to-end system for pedestrian detection in IoT environments. Our system consists of two modules: a near-chip filtering module and a detection module.
The near-chip filtering enabled a reduction of up to in bandwidth, enabling the use of DVS over low-bandwidth communication links. Our architecture uses few resources on a low-end FPGA; the main bottleneck in our design was the number of BRAMs. We estimated the power consumption of our module to be .
We showed that, despite the significant reduction in size, this representation is still useful for learning, and that a centralized detection module may process it to detect pedestrians in the scene. The computational complexity of the detection algorithm is low because (i) we use a shallow network with a small number of feature detectors, and (ii) the binary network representation reduces the execution time on a low-end microcontroller.
Future work involves implementing applications fully on the edge, which would require integrating the filtering algorithm with the detection algorithm. One would then benefit from the additional bandwidth savings of sending only the detection result over the wire rather than an intermediate representation. Due to the optimized nature of our filter for a single detection task, we expect better results compared to [luca2019-pcarect].
Another direction for future work is an on-chip ASIC implementation combining our filtering algorithm with the DVS. This would produce additional power and bandwidth savings by removing the need to go off-sensor for filtering.
Finally, an interesting extension of this work would be to perform classification based on multiple DVS streams from different cameras. As the output of the filter is lightweight, multiple sensors could feed a single classification performed on a low-complexity edge computing node.