Pixel-Aligned Recurrent Queries for Multi-View 3D Object Detection

¹Northeastern University, ²California Institute of Technology, ³Meta Reality Labs Research
* Equal advising
ICCV 2023

PARQ detects 3D objects from a short video snippet.

Hungarian matching with 3D Intersection-over-Union (IoU) is used to match predictions between the current and the previous snippet. The white bounding boxes are the ground truth. (Data is from ScanNet scene0169_00.)
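The matching step above can be sketched in a few lines. This is a hypothetical illustration, not the released PARQ code: boxes are simplified to axis-aligned `(xmin, ymin, zmin, xmax, ymax, zmax)` tuples, and an exhaustive optimal assignment stands in for the Hungarian algorithm (they return the same matching; exhaustive search is only practical for small box counts).

```python
from itertools import permutations

def iou_3d(a, b):
    """3D IoU between two axis-aligned boxes (xmin, ymin, zmin, xmax, ymax, zmax)."""
    inter = 1.0
    for i in range(3):  # overlap length along each axis
        inter *= max(min(a[i + 3], b[i + 3]) - max(a[i], b[i]), 0.0)
    vol = lambda box: (box[3] - box[0]) * (box[4] - box[1]) * (box[5] - box[2])
    union = vol(a) + vol(b) - inter
    return inter / union if union > 0 else 0.0

def match_boxes(prev, curr):
    """Optimal assignment maximizing total 3D IoU between two box sets.

    Returns (prev_index, curr_index) pairs; brute-force over permutations,
    which stands in for Hungarian matching at small N."""
    n = min(len(prev), len(curr))
    best, best_pairs = -1.0, []
    for perm in permutations(range(len(curr)), n):
        pairs = list(zip(range(n), perm))
        score = sum(iou_3d(prev[i], curr[j]) for i, j in pairs)
        if score > best:
            best, best_pairs = score, pairs
    return best_pairs
```

For example, two boxes detected in swapped order across snippets are still paired with their counterparts, since the assignment maximizes total IoU rather than relying on prediction order.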

Abstract

We present PARQ, a multi-view 3D object detector with a transformer and pixel-aligned recurrent queries. Unlike previous works that use learnable features or only encode 3D point positions as queries in the decoder, PARQ leverages appearance-enhanced queries initialized from reference points in 3D space and updates their 3D locations with recurrent cross-attention operations. Incorporating pixel-aligned features and cross-attention enables the model to encode the necessary 3D-to-2D correspondences and capture global contextual information from the input images. PARQ outperforms prior best methods on the ScanNet and ARKitScenes datasets, learns and detects faster, is more robust to distribution shifts in reference points, can leverage additional input views without retraining, and can adapt inference compute by changing the number of recurrent iterations.
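The "pixel-aligned" part of the abstract can be made concrete with a small sketch. This is an illustrative NumPy snippet under simplifying assumptions, not the authors' implementation: a 3D reference point in camera coordinates is projected into the image with the camera intrinsics `K`, and the 2D feature map is bilinearly sampled at the projected pixel, yielding the appearance feature that enhances the query for that point.

```python
import numpy as np

def project(points, K):
    """Project Nx3 camera-space points to Nx2 pixel coordinates via intrinsics K."""
    uvw = points @ K.T
    return uvw[:, :2] / uvw[:, 2:3]  # perspective divide by depth

def bilinear_sample(feat, uv):
    """Bilinearly sample an (H, W, C) feature map at Nx2 pixel locations."""
    H, W, _ = feat.shape
    u = np.clip(uv[:, 0], 0, W - 1.001)
    v = np.clip(uv[:, 1], 0, H - 1.001)
    u0, v0 = u.astype(int), v.astype(int)
    du, dv = (u - u0)[:, None], (v - v0)[:, None]
    return (feat[v0, u0] * (1 - du) * (1 - dv)
            + feat[v0, u0 + 1] * du * (1 - dv)
            + feat[v0 + 1, u0] * (1 - du) * dv
            + feat[v0 + 1, u0 + 1] * du * dv)

def pixel_aligned_features(points, K, feat):
    """Appearance features for 3D reference points: project, then sample."""
    return bilinear_sample(feat, project(points, K))
```

In the full model this sampling runs once per recurrent iteration: the decoder's cross-attention refines each query's 3D location, and the updated point is re-projected to fetch fresh pixel-aligned features for the next iteration.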

Overview Video

BibTeX

@inproceedings{xie2023parq,
  title={Pixel-Aligned Recurrent Queries for Multi-View {3D} Object Detection},
  author={Xie, Yiming and Jiang, Huaizu and Gkioxari, Georgia and Straub, Julian},
  booktitle={ICCV},
  year={2023}
}