[Figure: Example language descriptions in Long-RVOS.
Static: "A white dog with a woman nearby." / "A man wearing yellow clothes." / "A man wearing sunglasses sits at the rear of the boat."
Dynamic: "A goldfish is being placed into a water basin by a man." / "The boy tries to climb over the headboard." / "The cat crawls into the bag."
Hybrid: "A woman in a striped dress eats an ice-cream with a fork." / "A man in black plays with a black-and-red basketball."]
Referring video object segmentation (RVOS) aims to identify, track and segment objects in a video based on language descriptions. To advance the task towards more practical scenarios, we introduce Long-RVOS, a large-scale benchmark for long-term referring video object segmentation. Long-RVOS contains 2,000+ videos with an average duration exceeding 60 seconds, covering a variety of objects that undergo occlusion, disappearance-reappearance and shot changes. The objects are manually annotated with three different types of descriptions: static, dynamic and hybrid. Moreover, unlike previous benchmarks that rely solely on per-frame spatial evaluation, we introduce two new metrics to assess temporal and spatiotemporal consistency. We further propose ReferMo, a promising baseline method that integrates motion information to expand the temporal receptive field, and employs a local-to-global architecture to capture both short-term dynamics and long-term dependencies. We hope that Long-RVOS and our baseline can drive future RVOS research towards tackling more realistic and long-form videos.
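As a rough illustration of how consistency-oriented evaluation differs from the usual per-frame spatial evaluation, the sketch below contrasts mean per-frame IoU with a temporal-extent IoU and a spatiotemporal volume IoU. The function names and definitions here are illustrative assumptions, not the official Long-RVOS metrics.

# Hypothetical sketch (not the official Long-RVOS metrics). Masks are boolean
# numpy arrays of shape (T, H, W): pred for the model, gt for the annotation.
import numpy as np

def per_frame_iou(pred, gt):
    """Mean spatial IoU over frames where either mask is non-empty."""
    ious = []
    for p, g in zip(pred, gt):
        if p.any() or g.any():
            inter = np.logical_and(p, g).sum()
            union = np.logical_or(p, g).sum()
            ious.append(inter / union)
    return float(np.mean(ious)) if ious else 0.0

def temporal_iou(pred, gt):
    """IoU of the sets of frames in which the object is predicted vs. annotated,
    penalising predictions that miss disappearance and reappearance."""
    pred_present = pred.reshape(len(pred), -1).any(axis=1)
    gt_present = gt.reshape(len(gt), -1).any(axis=1)
    inter = np.logical_and(pred_present, gt_present).sum()
    union = np.logical_or(pred_present, gt_present).sum()
    return float(inter / union) if union else 1.0

def volume_iou(pred, gt):
    """Spatiotemporal IoU computed over the whole (T, H, W) volume at once."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return float(inter / union) if union else 1.0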
A video is decomposed into clips (keyframe + motion frames). ReferMo perceives the static attributes and short-term motions within each clip, then aggregates inter-clip information to capture the global target. Notably, ReferMo is supervised by keyframe masks only, and SAM2 is used only at inference to track the target in subsequent frames.
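The clip decomposition described above can be pictured with the following minimal sketch; the clip length, function name and loader are assumptions for illustration, not the released ReferMo code.

# Hypothetical sketch of the clip decomposition: split a video into fixed-length
# clips, take one keyframe per clip, and treat the remaining frames as motion
# frames. The keyframe carries static appearance cues; the motion frames supply
# short-term dynamics that a local encoder aggregates before a global module
# reasons over all clips.
from typing import List, Tuple

def decompose_into_clips(frames: List, clip_len: int = 8) -> List[Tuple[object, List]]:
    """Return a list of (keyframe, motion_frames) pairs."""
    clips = []
    for start in range(0, len(frames), clip_len):
        clip = frames[start:start + clip_len]
        keyframe, motion_frames = clip[0], clip[1:]
        clips.append((keyframe, motion_frames))
    return clips

# Example: a 60-second video sampled at 5 fps yields 300 frames -> 38 clips.
# video_frames = load_video("example.mp4")   # hypothetical loader
# clips = decompose_into_clips(video_frames, clip_len=8)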
@article{liang2025longrvos,
title = {Long-RVOS: A Comprehensive Benchmark for Long-term Referring Video Object Segmentation},
author = {Liang, Tianming and Jiang, Haichao and Yang, Yuting and Tan, Chaolei and Li, Shuai and Zheng, Wei-Shi and Hu, Jian-Fang},
journal = {arXiv preprint arXiv:2505.12702},
year = {2025}
}