Visual relationship understanding has been studied separately in human-object interaction (HOI) detection, scene graph generation (SGG), and referring relationships (RR) tasks. Given the complexity and interconnectedness of these tasks, a flexible framework that can address all of them in a cohesive manner is crucial. In this work, we propose FleVRS, a single model that seamlessly integrates the above three aspects in standard and promptable visual relationship segmentation, and further supports open-vocabulary segmentation to adapt to novel scenarios. FleVRS leverages the synergy between text and image modalities to ground various types of relationships in images, and uses textual features from vision-language models for visual conceptual understanding. Empirical validation across various datasets demonstrates that our framework outperforms existing models in standard, promptable, and open-vocabulary tasks, e.g., +1.9 mAP on HICO-DET, +11.4 Acc on VRD, and +4.7 mAP on unseen HICO-DET. FleVRS represents a significant step towards a more intuitive, comprehensive, and scalable understanding of visual relationships.
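The open-vocabulary capability mentioned above hinges on classifying against text embeddings rather than a fixed label set. Below is a minimal sketch of that idea, assuming a CLIP text encoder from the `transformers` library; the prompt template, the predicate list, and the stand-in query embedding are illustrative placeholders, not the paper's actual implementation.

```python
# Sketch: open-vocabulary predicate classification via VLM text features.
# NOTE: this is NOT the FleVRS code; it only illustrates the general idea
# of matching a learned query embedding against text embeddings, so that
# unseen classes only require adding new class names at test time.
import torch
from transformers import CLIPTokenizer, CLIPTextModelWithProjection

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-base-patch32")
text_encoder = CLIPTextModelWithProjection.from_pretrained("openai/clip-vit-base-patch32")

# Class names can include predicates never seen during training.
predicates = ["riding", "holding", "eating"]
inputs = tokenizer(
    [f"a photo of a person {p} something" for p in predicates],  # hypothetical template
    padding=True, return_tensors="pt",
)
with torch.no_grad():
    text_embeds = text_encoder(**inputs).text_embeds  # (num_classes, 512)
text_embeds = text_embeds / text_embeds.norm(dim=-1, keepdim=True)

# Stand-in for a relationship query embedding produced by the vision model.
query = torch.randn(1, 512)
query = query / query.norm(dim=-1, keepdim=True)

scores = query @ text_embeds.T  # cosine similarity to each class name
print("predicted predicate:", predicates[scores.argmax().item()])
```

Because classification reduces to similarity against text embeddings, swapping or extending the vocabulary requires no retraining of the classifier head.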
For flexible visual relationship segmentation, we show visualizations of the subject masks, object masks, and relationship category outputs produced for three types of text prompts. In (c), the predicted predicates are shown in bold; unseen objects and predicates are shown in red. A hypothetical usage sketch of this promptable interface follows below.
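To make the promptable interface concrete, here is a minimal usage sketch. The `FleVRS` wrapper class, its `predict` method, and the bracketed prompt syntax are invented for illustration only; they are not the repository's actual API, and the exact prompt formats are described in the paper.

```python
# Hypothetical sketch of promptable VRS: depending on which parts of the
# <subject, predicate, object> triplet the text prompt specifies, the model
# fills in the missing parts and segments the subject and object.
prompts = {
    "(a) full triplet":    "person riding horse",
    "(b) partial triplet": "person [predicate?] horse",       # predicate is predicted
    "(c) subject only":    "person [predicate?] [object?]",   # predicate and object predicted
}

class FleVRS:
    """Placeholder wrapper; the real model returns subject/object masks
    and relationship categories for each grounded triplet."""

    def predict(self, image_path: str, prompt: str) -> list[dict]:
        # Stand-in: would return dicts like
        # {"subject_mask": ..., "object_mask": ..., "predicate": ...}
        return []

model = FleVRS()
for name, prompt in prompts.items():
    results = model.predict("example.jpg", prompt)
    print(f"{name}: {prompt!r} -> {len(results)} relationships")
```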
We show quantitative results for standard VRS on the HICO-DET, V-COCO, and PSG datasets, and for open-vocabulary VRS on HICO-DET. Please check our paper 📄 for more details.
If you find our work useful in your research, please consider starring ⭐ us on GitHub and citing 📚 us!
@inproceedings{zhu2024towards,
    author    = {Zhu, Fangrui and Yang, Jianwei and Jiang, Huaizu},
    title     = {Towards Flexible Visual Relationship Segmentation},
    booktitle = {NeurIPS},
    year      = {2024}
}