Drones used for border surveillance usually fly at high altitudes and embed a high-resolution image acquisition system that can cover a large land surface. Targets to be detected are therefore often very small, sometimes only a few pixels wide (Figure 1). Surveillance drones typically acquire color images (i.e., 3-channel data) or thermal/infrared images (i.e., single-channel data). The required processing frame rate depends on the velocity and on the kind of event to be detected or processing to be performed; it usually ranges from one frame per second to a dozen frames per second. One important constraint relates to the power consumption of the processing solution: due to the limited battery capacity of drones, the power budget allocated to the whole processing system must remain under 30 W, including the various navigation controllers and the processing dedicated to the drone mission.

Figure 1: Example of a picture containing a ship not far from the coast (image from MASATIv2 dataset [1] [2], obtained from Microsoft® Bing™ Maps).
From an algorithmic point of view, the drone should be able to execute both segmentation and classification applications. It must therefore support the network layers used to build neural networks that are effective at solving such problems, in particular Convolutional Neural Networks (CNNs). An important requirement is that the executed applications can change during the drone mission, either by modifying weights only or by modifying both topologies and weights. In this project, as an application use case, Thales considers a boat classification application for maritime areas.
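The actual Thales network is not described here, but the basic CNN building blocks it relies on (convolution, non-linearity, pooling) can be sketched in a few lines of NumPy. The sketch below is purely illustrative: the 16x16 single-channel patch, the 3x3 averaging kernel, and the layer sizes are assumptions, not the project's real model.

```python
import numpy as np

def conv2d(x, k):
    """Valid 2-D cross-correlation of a single-channel image x with kernel k."""
    kh, kw = k.shape
    h, w = x.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def relu(x):
    """Element-wise rectified linear unit."""
    return np.maximum(x, 0.0)

def max_pool2(x):
    """2x2 max pooling with stride 2 (assumes even dimensions)."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

# A 16x16 single-channel (thermal-style) patch containing a bright
# 3x3 "target" only a few pixels wide, as in the surveillance images.
img = np.zeros((16, 16))
img[6:9, 6:9] = 1.0

kernel = np.ones((3, 3)) / 9.0        # simple averaging filter
feat = max_pool2(relu(conv2d(img, kernel)))
print(feat.shape)                      # (7, 7) feature map
```

Stacking such layers, with learned kernels instead of a fixed averaging filter, is what the accelerator executes at full scale.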
A hybrid execution platform: due to strong constraints on power consumption and power efficiency, the application needs specific hardware tailored to the algorithms. Thales proposed to connect the eProcessor chip to its own CNN accelerator implemented on an FPGA, through a memory-coherent link developed in the project by other partners (FORTH, Chalmers and Extoll). The border surveillance application is therefore meant to run on both the eProcessor core and the off-chip CNN accelerator. The off-chip accelerator supports the execution of the CNN itself, while some specific layers remain on the eProcessor core, where they can be accelerated using parallel software frameworks such as OpenMP and eProcessor-specific accelerators. The eProcessor chip also executes the parts of the application that are not pure artificial intelligence algorithms.
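The split between accelerator-supported layers and layers kept on the eProcessor core can be pictured as a simple partitioning of the layer sequence. The sketch below is a hypothetical illustration: the set of accelerator-supported operations and the layer names are assumptions, not the actual Thales IP interface.

```python
# Hypothetical set of operations handled by the off-chip CNN accelerator;
# anything else falls back to the eProcessor core (illustrative only).
ACCEL_OPS = {"conv2d", "relu", "maxpool"}

def partition(layers, accel_ops):
    """Split a layer sequence into contiguous runs tagged with their target,
    so each run can be dispatched as one unit (accelerator or CPU)."""
    runs = []
    for op in layers:
        target = "accel" if op in accel_ops else "cpu"
        if runs and runs[-1][0] == target:
            runs[-1][1].append(op)     # extend the current run
        else:
            runs.append((target, [op]))  # start a new run
    return runs

layers = ["conv2d", "relu", "maxpool", "conv2d", "relu", "softmax"]
plan = partition(layers, ACCEL_OPS)
print(plan)
# [('accel', ['conv2d', 'relu', 'maxpool', 'conv2d', 'relu']),
#  ('cpu', ['softmax'])]
```

Grouping layers into contiguous runs keeps the number of host-accelerator transitions low, which matters when each transition involves synchronization over the coherent link.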
A CNN accelerator: in the eProcessor project, Thales proposed to use its own CNN accelerator IP, a programmable 2D systolic architecture based on static, dedicated hardware functions. The programming flow is compatible with AI frameworks such as TensorFlow. The IP can be generated using a custom toolchain and targets FPGA platforms. A given CNN topology is trained in a classical supervised way with the TensorFlow framework, using an annotated database. A compiler developed for the CNN IP then generates the firmware executed by the IP. A specific API library can be instantiated within C++/OpenMP code to control and communicate with the IP.
eProcessor: integrating the Thales CNN accelerator in a memory-coherent way with the eProcessor chip is a challenging task, and Thales started to explore how to enable memory coherence with its CNN accelerator using a traditional FPGA evaluation platform based on an Arm subsystem. This emulation platform makes it possible to obtain first results on both the software and hardware sides before the complete coherence mechanism is available on the multicore version of the eProcessor chip. Porting small CNN models onto a RISC-V QEMU SDV (Software Development Vehicle), generated using a set of scripts developed by FORTH, made it possible to validate the software toolchain for CNN quantization on the RISC-V instruction set. The next step for this activity is to interconnect the FPGA board containing the CNN accelerator with the real eProcessor ASIC to form the hybrid execution platform and demonstrate the memory-coherency and energy-efficiency advantages for the use case.
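The CNN quantization toolchain validated on the SDV is part of the project's proprietary flow, but the core idea of quantizing trained weights for an embedded target can be sketched generically. The following is a minimal illustration of symmetric per-tensor int8 quantization, a common post-training scheme; it is an assumption for explanation, not the actual Thales toolchain.

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor int8 quantization of a float weight tensor.
    Returns the int8 tensor and the scale needed to dequantize it."""
    scale = np.max(np.abs(w)) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an approximate float tensor from its int8 representation."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal((3, 3)).astype(np.float32)

q, s = quantize_int8(w)
err = np.max(np.abs(dequantize(q, s) - w))
print(q.dtype, err <= 0.5 * s)  # rounding error is bounded by half a step
```

Running the quantized model on the RISC-V SDV then checks that integer arithmetic on the target instruction set reproduces the accuracy measured in the TensorFlow reference.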
[1] A.-J. Gallego, A. Pertusa, and P. Gil, “Automatic Ship Classification from Optical Aerial Images with Convolutional Neural Networks”, Remote Sensing, vol. 10, no. 4, 2018.
[2] S. Alashhab, A.-J. Gallego, A. Pertusa, and P. Gil, “Precise Ship Location With CNN Filter Selection From Optical Aerial Images”, IEEE Access, vol. 7, 2019.