A multi-space approach to zero-shot object detection


Date

2020

Authors

Gupta D.
Anantharaman A.
Mamgain N.
Sowmya Kamath S.
Balasubramanian V.N.
Jawahar C.V.


Abstract

Object detection underpins higher-level vision tasks such as scene understanding and contextual reasoning; solving it for a large number of visual categories is therefore paramount. Zero-Shot Object Detection (ZSD), where training data is unavailable for some of the target classes, provides semantic scalability to object detection and reduces dependence on large amounts of annotation, enabling many real-life applications. In this paper, we propose a novel multi-space approach to ZSD in which we combine predictions obtained in two different search spaces: we learn a projection of proposals' visual features into the semantic embedding space and a projection of class labels' semantic embeddings into the visual space, predict similarity scores in each space, and combine them. We present promising results on two datasets, PASCAL VOC and MS COCO. We further discuss the problem of hubness and show that our approach alleviates it, with performance superior to previously proposed methods. © 2020 IEEE.
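The combination of scores from the two search spaces described in the abstract can be illustrated with a minimal sketch. All names (`multi_space_scores`, the projection matrices, the mixing weight `alpha`) are hypothetical placeholders, not the authors' implementation; the learned projections are assumed to be given.

```python
import numpy as np

def cosine_sim(a, b):
    """Pairwise cosine similarity between rows of a and rows of b."""
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return a @ b.T

def multi_space_scores(proposal_feats, class_embeds,
                       vis_to_sem, sem_to_vis, alpha=0.5):
    """Combine proposal-class similarity scores from two search spaces.

    proposal_feats: (n, d_v) visual features of region proposals
    class_embeds:   (c, d_s) semantic (word) embeddings of class labels
    vis_to_sem:     (d_v, d_s) learned projection, visual -> semantic
    sem_to_vis:     (d_s, d_v) learned projection, semantic -> visual
    alpha:          mixing weight (a placeholder hyperparameter)
    """
    # Score in the semantic space: project proposals, compare to labels.
    sem_scores = cosine_sim(proposal_feats @ vis_to_sem, class_embeds)
    # Score in the visual space: project labels, compare to proposals.
    vis_scores = cosine_sim(proposal_feats, class_embeds @ sem_to_vis)
    # Combined (n, c) score matrix used to classify each proposal.
    return alpha * sem_scores + (1 - alpha) * vis_scores
```

Scoring in both directions, rather than only projecting visual features into the semantic space, is one way such methods reduce hubness, since no single space's geometry dominates the nearest-neighbor assignment.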

Citation

Proceedings - 2020 IEEE Winter Conference on Applications of Computer Vision, WACV 2020, pp. 1198-1206
