Open-Vocabulary Temporal Action Localization using Multimodal Guidance


Akshita Gupta (University of Guelph), Aditya Arora (York University), Sanath Narayan (Technology Innovation Institute), Salman Khan (Mohamed bin Zayed University of Artificial Intelligence), Fahad Shahbaz Khan (Mohamed bin Zayed University of Artificial Intelligence), Graham W. Taylor (University of Guelph)
The 35th British Machine Vision Conference

Abstract

Open-Vocabulary Temporal Action Localization (OVTAL) enables a model to recognize any desired action category in videos without the need to explicitly curate training data for all categories. However, this flexibility poses significant challenges, as the model must recognize not only the action categories seen during training but also novel categories specified at inference. Unlike standard temporal action localization, where training and test categories are predetermined, OVTAL requires understanding contextual cues that reveal the semantics of novel categories. To address these challenges, we introduce OVFormer, a novel open-vocabulary framework extending ActionFormer with three key contributions. First, we employ task-specific prompts as input to a large language model to obtain rich class-specific descriptions for action categories. Second, we introduce a cross-attention mechanism that learns the alignment between class representations and frame-level video features, yielding multimodal-guided features. Third, we propose a two-stage training strategy: training on a larger-vocabulary dataset followed by fine-tuning on downstream data to generalize to novel categories. OVFormer extends existing TAL methods to open-vocabulary settings. Comprehensive evaluations on the THUMOS14 and ActivityNet-1.3 benchmarks demonstrate the effectiveness of our method. Our code is available at https://github.com/adityac8/OVFormer.
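
The second contribution, cross-attention between class representations and frame-level video features, can be pictured with the minimal PyTorch sketch below. This is our own illustration rather than the released OVFormer code (see the repository linked above for the actual implementation); the module name MultimodalGuidance, the prompt template, and all tensor shapes are assumptions for the sake of the example.

    import torch
    import torch.nn as nn

    # Hypothetical task-specific prompt for eliciting a class description
    # from a large language model; the exact prompts are not given in the abstract.
    def build_prompt(action_name: str) -> str:
        return (
            f"Describe the visual appearance, typical motion, and context "
            f"of the action '{action_name}' as it appears in a video."
        )

    class MultimodalGuidance(nn.Module):
        """Cross-attention between frame-level video features (queries) and
        class-description embeddings (keys/values). Illustrative sketch only."""

        def __init__(self, feat_dim: int, text_dim: int, num_heads: int = 8):
            super().__init__()
            # Project text embeddings into the video feature space.
            self.text_proj = nn.Linear(text_dim, feat_dim)
            self.cross_attn = nn.MultiheadAttention(
                feat_dim, num_heads, batch_first=True
            )
            self.norm = nn.LayerNorm(feat_dim)

        def forward(self, frame_feats, class_embeds):
            # frame_feats:  (B, T, feat_dim)  -- per-frame video features
            # class_embeds: (B, C, text_dim)  -- one embedding per action description
            text = self.text_proj(class_embeds)
            attended, _ = self.cross_attn(query=frame_feats, key=text, value=text)
            # Residual connection keeps the original video evidence intact.
            return self.norm(frame_feats + attended)

    if __name__ == "__main__":
        guide = MultimodalGuidance(feat_dim=512, text_dim=768)
        video = torch.randn(2, 128, 512)   # 2 clips, 128 frames each
        texts = torch.randn(2, 20, 768)    # 20 candidate action descriptions
        print(guide(video, texts).shape)   # torch.Size([2, 128, 512])

In this sketch the frame features act as queries and the projected class-description embeddings as keys and values, so each frame gathers semantic context from whichever action descriptions it resembles, which is one plausible reading of how the multimodal-guided features are formed.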

Citation

@inproceedings{Gupta_2024_BMVC,
author    = {Akshita Gupta and Aditya Arora and Sanath Narayan and Salman Khan and Fahad Shahbaz Khan and Graham W. Taylor},
title     = {Open-Vocabulary Temporal Action Localization using Multimodal Guidance},
booktitle = {35th British Machine Vision Conference 2024, {BMVC} 2024, Glasgow, UK, November 25-28, 2024},
publisher = {BMVA},
year      = {2024},
url       = {https://papers.bmvc2024.org/1013.pdf}
}


Copyright © 2024 The British Machine Vision Association and Society for Pattern Recognition