Modality Translation for Object Detection Adaptation Without Forgetting Prior Knowledge
Heitor Medeiros
Masih Aminbeidokhti
Fidel A. G. Pena
David Latortue
Eric Granger
Marco Pedersoli
ECCV 2024
[GitHub]
[Paper]
[Poster]




Abstract

A common practice in deep learning involves training large neural networks on massive datasets to achieve high accuracy across various domains and tasks. While this approach works well in many application areas, it often fails drastically when processing data from a new modality with a significant distribution shift from the data used to pre-train the model. This paper focuses on adapting a large object detection model trained on RGB images to new data extracted from IR images with a substantial modality shift. We propose Modality Translator (ModTr) as an alternative to the common approach of fine-tuning a large model to the new modality. ModTr adapts the IR input image with a small transformation network trained to directly minimize the detection loss. The original RGB model can then work on the translated inputs without any further changes or fine-tuning of its parameters. Experimental results on translating from IR to RGB images on two well-known datasets show that our simple approach provides detectors that perform comparably to or better than standard fine-tuning, without forgetting the knowledge of the original model. This opens the door to a more flexible and efficient service-based detection pipeline, where a unique and unaltered server, such as an RGB detector, runs constantly while being queried by different modalities, such as IR, each with its corresponding translation model.
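
To make the idea concrete, below is a minimal PyTorch-style sketch of the training loop, assuming a torchvision Faster R-CNN as the frozen RGB detector. This is not the released implementation; names such as TranslationNet and training_step are illustrative.

```python
# Hedged sketch of the ModTr idea: a small IR->RGB-like translation network is trained
# to minimize the detection loss of a frozen, pre-trained RGB detector.
import torch
import torch.nn as nn
from torchvision.models.detection import fasterrcnn_resnet50_fpn

class TranslationNet(nn.Module):
    """Small convolutional network mapping a 3-channel IR input to an RGB-like image."""
    def __init__(self, channels=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 3, 3, padding=1), nn.Sigmoid(),  # keep output in [0, 1]
        )

    def forward(self, x):
        return self.net(x)

detector = fasterrcnn_resnet50_fpn(weights="DEFAULT")
detector.requires_grad_(False)   # the RGB detector is never updated
detector.train()                 # train mode so it returns the detection loss dict

modtr = TranslationNet()
optimizer = torch.optim.AdamW(modtr.parameters(), lr=1e-4)

def training_step(ir_images, targets):
    """ir_images: list of 3xHxW tensors (IR replicated to 3 channels); targets: list of box/label dicts."""
    translated = [modtr(img.unsqueeze(0)).squeeze(0) for img in ir_images]
    loss_dict = detector(translated, targets)   # standard detection losses on translated inputs
    loss = sum(loss_dict.values())
    optimizer.zero_grad()
    loss.backward()                             # gradients flow through the frozen detector into ModTr
    optimizer.step()
    return loss.item()
```

At inference, the detector is switched to eval mode and simply receives the translated images, so the same unchanged RGB weights remain available for plain RGB inputs.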


Try our code



Bounding box predictions over different adaptations of the RGB detector (Faster R-CNN) for IR images on two benchmarks: LLVIP and FLIR. Yellow and red boxes show the ground-truth and predicted detections, respectively. In a) we show the RGB data. In b), FastCUT is an unsupervised image translation approach that takes infrared (IR) images as input and produces pseudo-RGB images; it does not focus on detection and requires both modalities for training. In c) we show fine-tuning, the standard approach to adapting the detector to the new modality; it requires only IR data but forgets the original knowledge of the RGB detector. Finally, d) shows ModTr, which focuses the translation on detection, requires only IR data, and does not forget the original knowledge, so the detector can be reused for other tasks. Bounding box predictions for other detectors are provided in the supplementary material.


Paper and Supplementary Material

Heitor Rapela Medeiros, Masih Aminbeidokhti, Fidel A. Guerrero Pena, David Latortue, Eric Granger, Marco Pedersoli

Modality Translation for Object Detection Adaptation Without Forgetting Prior Knowledge.
In ECCV, 2024.

(hosted on ECCV2024)

[Bibtex]

Experiments and Results

- Comparison with Translation Approaches.



Table 1: Detection performance (AP) of ModTr versus baseline image-to-image methods that translate IR to RGB-like images, using three different detectors (FCOS, RetinaNet, and Faster R-CNN). The methods were evaluated on the IR test sets of the LLVIP and FLIR datasets. The RGB column indicates whether the method requires access to RGB images during training, and Box indicates whether ground-truth boxes are used during training.

- Translation vs. Fine-tuning.



Table 2: Detection performance (AP) of ModTr versus baseline fine-tuning (FT) of the full detector, FT of the detection head, and LoRA [18], using three different detectors (FCOS, RetinaNet, and Faster R-CNN). The methods were evaluated on the IR test sets of the LLVIP and FLIR datasets. Results marked "-" correspond to runs whose optimization diverged.

- Different Backbones for ModTr.



Table 3: Detection performance (AP) of ModTr with different backbones for the translation network, varying in number of parameters, using three different detectors (FCOS, RetinaNet, and Faster R-CNN). The methods were evaluated on the IR test sets of the LLVIP and FLIR datasets.

- Knowledge Preservation through Input Modality Translation.



Table 4: Detection performance (AP) of the knowledge-preserving strategies N-Detectors, 1-Detector, and N-ModTr-1-Detector, using three different detectors (FCOS, RetinaNet, and Faster R-CNN). The methods were evaluated on COCO and on the IR test sets of the LLVIP and FLIR datasets.
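
As a rough illustration of the N-ModTr-1-Detector setting, the sketch below shows one frozen detector shared by several modality-specific translation modules; the translators dictionary and detect helper are hypothetical names for how such a service could route requests, not the paper's code.

```python
# Hedged sketch: one frozen RGB detector served to N modalities, each with its own ModTr module.
import torch

@torch.no_grad()
def detect(detector, translators, image, modality):
    """Route an input through its modality's translator (if any), then the shared frozen detector."""
    detector.eval()
    if modality in translators:                  # e.g. {"ir_llvip": modtr_llvip, "ir_flir": modtr_flir}
        image = translators[modality](image.unsqueeze(0)).squeeze(0)
    return detector([image])[0]                  # torchvision detectors return a list of prediction dicts
```

Plain RGB queries bypass the translators entirely, which is why the shared detector keeps its original COCO performance in Table 4.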

- Visualization of ModTr Translated Images.



Figure 3: Illustration of a sequence of 8 images from the LLVIP and FLIR datasets for Faster R-CNN. For each dataset, the first row shows the RGB modality, followed by the IR modality and the different representations produced by ModTr.

- Fine-tuning of ModTr and the Detector.



Figure 4: Comparison of fine-tuning ModTr versus standard fine-tuning on the FLIR dataset for the three different detectors (FCOS, RetinaNet, and Faster R-CNN). Blue: fine-tuning; orange: ModTr⊙; green: ModTr⊙ + FT.


Acknowledgements

This work was supported in part by Distech Controls Inc., the Natural Sciences and Engineering Research Council of Canada, the Digital Research Alliance of Canada, and MITACS.