FleVRS: Towards Flexible Visual Relationship Segmentation

NeurIPS 2024

1Northeastern University, 2Microsoft Research
FleVRS is a single model trained to support standard, promptable, and open-vocabulary fine-grained visual relationship segmentation (subject mask, relationship categories, object mask). It can take either images alone or images paired with structured text prompts as input, and segment either all relationships present in the image or only those specified by the prompts.

Introduction

Visual relationship understanding has been studied separately in human-object interaction (HOI) detection, scene graph generation (SGG), and referring relationships (RR) tasks. Given the complexity and interconnectedness of these tasks, it is crucial to have a flexible framework that can address them in a cohesive manner. In this work, we propose FleVRS, a single model that seamlessly integrates the above three aspects in standard and promptable visual relationship segmentation, and further supports open-vocabulary segmentation to adapt to novel scenarios. FleVRS leverages the synergy between text and image modalities to ground various types of relationships from images, and uses textual features from vision-language models to enhance visual conceptual understanding. Empirical validation across various datasets demonstrates that our framework outperforms existing models in standard, promptable, and open-vocabulary tasks, e.g., +1.9 mAP on HICO-DET, +11.4 Acc on VRD, and +4.7 mAP on unseen HICO-DET. FleVRS represents a significant step towards a more intuitive, comprehensive, and scalable understanding of visual relationships.

Structure

Overview of FleVRS.
Standard VRS: without textual queries, the latent queries \( \mathbf{Q}^{\mathbf{v}} \) perform self- and cross-attention within the relationship decoder, and each query outputs one triplet.
Promptable VRS: the decoder additionally takes textual queries \( \mathbf{Q}^{\mathbf{t}} \), concatenated with the latent queries \( \mathbf{Q}^{\mathbf{v}} \). It predicts triplets in the same way, with each prediction read from the \( \mathbf{Q}^{\mathbf{v}} \) outputs and conditioned on the features of the optional textual prompt \( \mathbf{Q}^{\mathbf{t}} \) (see the sketch after this list).
Open-vocabulary VRS: the model computes matching scores between the predicted class embeddings and textual features from CLIP, so that relationships involving unseen subject, object, or predicate categories can be recognized.
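
To make the query scheme above concrete, below is a minimal PyTorch sketch of a relationship decoder that consumes latent queries, optionally concatenates textual prompt queries, and scores the predicted class embeddings against CLIP text features. All module names, head designs, and sizes (e.g., RelationshipDecoderSketch, open_vocab_scores, the dot-product mask decoding, d_model=256, 100 latent queries) are illustrative assumptions for exposition, not the released FleVRS implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class RelationshipDecoderSketch(nn.Module):
    """Sketch of the flexible query scheme: latent queries Q^v always attend to
    image features; when a structured text prompt is given, its embeddings Q^t
    are concatenated so the predicted triplets are conditioned on the prompt."""

    def __init__(self, d_model=256, num_latent_queries=100, num_layers=6, nhead=8):
        super().__init__()
        self.latent_queries = nn.Embedding(num_latent_queries, d_model)  # Q^v
        layer = nn.TransformerDecoderLayer(d_model, nhead, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers)
        # per-query heads: subject-mask embedding, object-mask embedding, class embedding
        self.subj_mask_head = nn.Linear(d_model, d_model)
        self.obj_mask_head = nn.Linear(d_model, d_model)
        self.class_embed_head = nn.Linear(d_model, d_model)

    def forward(self, image_feats, pixel_feats, text_queries=None):
        """image_feats: (B, HW, d) flattened encoder features (cross-attention memory)
        pixel_feats: (B, d, H, W) per-pixel embeddings used to decode masks
        text_queries: optional (B, T, d) embeddings of the structured prompt (Q^t)"""
        B = image_feats.size(0)
        q_v = self.latent_queries.weight.unsqueeze(0).expand(B, -1, -1)     # (B, N, d)
        queries = q_v if text_queries is None else torch.cat([q_v, text_queries], dim=1)

        hs = self.decoder(tgt=queries, memory=image_feats)   # self- and cross-attention
        hs = hs[:, : q_v.size(1)]                            # triplets are read from the Q^v slots

        # dot-product mask decoding against per-pixel embeddings
        subj_masks = torch.einsum("bnd,bdhw->bnhw", self.subj_mask_head(hs), pixel_feats)
        obj_masks = torch.einsum("bnd,bdhw->bnhw", self.obj_mask_head(hs), pixel_feats)
        class_embeds = self.class_embed_head(hs)              # matched to CLIP text features below
        return subj_masks, obj_masks, class_embeds


def open_vocab_scores(class_embeds, clip_text_feats, temperature=0.07):
    """Cosine-similarity matching between predicted class embeddings (B, N, d)
    and CLIP text embeddings of seen + unseen category names (C, d)."""
    q = F.normalize(class_embeds, dim=-1)
    t = F.normalize(clip_text_feats, dim=-1)
    return q @ t.t() / temperature                            # (B, N, C) matching scores


# toy usage with random features: B=1, d=256, 16x16 feature map, 81 category names
model = RelationshipDecoderSketch()
image_feats, pixel_feats = torch.randn(1, 16 * 16, 256), torch.randn(1, 256, 16, 16)
_, _, class_embeds = model(image_feats, pixel_feats)          # standard VRS: no text prompt
scores = open_vocab_scores(class_embeds, torch.randn(81, 256))  # (1, 100, 81)

In standard VRS, text_queries is simply None; in promptable VRS, the encoded structured prompt is passed in, and the same heads produce masks and class embeddings conditioned on it.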

Qualitative Results

For flexible visual relationship segmentation, we visualize the subject masks, object masks, and relationship category outputs produced for three types of text prompts. In (c), predicted predicates are shown in bold, and unseen objects and predicates are shown in red.

Promptable VRS on HICO-DET dataset.
Promptable VRS on PSG dataset.

Quantitative Results

We report quantitative results for standard VRS on the HICO-DET, V-COCO, and PSG datasets, and for open-vocabulary VRS on the HICO-DET dataset. Please check our paper 📄 for more details.

Standard VRS on HICO-DET dataset.
Standard VRS on V-COCO dataset.
Standard VRS on PSG dataset.
Open-vocabulary VRS on HICO-DET dataset.

Citation

If you find our work useful, please consider starring ⭐ us on GitHub and citing 📚 our paper in your research!

@inproceedings{zhu2024towards,
  author    = {Zhu, Fangrui and Yang, Jianwei and Jiang, Huaizu},
  title     = {Towards Flexible Visual Relationship Segmentation},
  booktitle = {NeurIPS},
  year      = {2024}
}