Query-Guided Refinement and Dynamic Spans Network for Video Highlight Detection and Temporal Grounding in Online Information Systems

Yifang Xu, Yunzhuo Sun, Zien Xie, Benxiang Zhai, Youyao Jia, Sidan Du
Copyright: © 2023 | Pages: 20
DOI: 10.4018/IJSWIS.332768

Abstract

With the surge in online video content, identifying highlights and key video segments has garnered widespread attention. Given a textual query, video highlight detection (HD) and temporal grounding (TG) aim to predict frame-wise saliency scores for a video while concurrently locating all query-relevant spans. Despite recent progress, DETR-based methods crudely fuse the different inputs in the encoder, which limits effective cross-modal interaction. To address this challenge, the authors design QD-Net (query-guided refinement and dynamic spans network), tailored for HD&TG. Specifically, they propose a query-guided refinement module that decouples feature encoding from the cross-modal interaction process. Furthermore, they present a dynamic span decoder that leverages learnable 2D spans as decoder queries, which accelerates training convergence for TG. On the QVHighlights dataset, the proposed QD-Net achieves 61.87 HD-HIT@1 and 61.88 TG-mAP@0.5, improvements of +1.88 and +8.05, respectively, over the state-of-the-art method.

Introduction

The rapid advancement of artificial intelligence has significantly elevated video content creation technologies, resulting in tens of millions of new videos being uploaded to online platforms daily (Taleb & Abbas, 2022; Abbas et al., 2021). Given this vast volume of content, users urgently want to view the highlights of a video, or to retrieve the precise frames most pertinent to a given textual query, so that they can quickly skip to the relevant segments (Hamza et al., 2022; Sahoo & Gupta, 2021). In this paper, we focus on two video understanding tasks: highlight detection (HD) and temporal grounding (TG), as depicted in Figure 1. Given a video paired with a corresponding natural language query, HD aims to predict a highlight score for each video clip (Y. Liu et al., 2022), while TG aims to retrieve all spans in the video that are most relevant to the query, where each span consists of a start and an end clip (Gao et al., 2017). Since both tasks aim to find the clips that best match the query, recent work (Lei et al., 2021) proposed the QVHighlights dataset to study HD and TG concurrently.
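To make the input-output structure of HD&TG concrete, the following sketch shows what a single example and its prediction might look like. All names, shapes, and the sample values are illustrative assumptions, not code or data from the paper.

```python
# Minimal data-shape sketch of one HD&TG example (names and shapes are
# illustrative assumptions, not taken from the paper).
from dataclasses import dataclass
from typing import List, Tuple

import torch


@dataclass
class HDTGExample:
    clip_features: torch.Tensor       # (num_clips, dim) visual features of the video clips
    query: str                        # natural-language query


@dataclass
class HDTGPrediction:
    saliency: torch.Tensor            # (num_clips,) clip-wise highlight scores (HD)
    spans: List[Tuple[int, int]]      # (start_clip, end_clip) of every query-relevant span (TG)


# E.g., a video split into 75 clips with two query-relevant segments (made-up values):
example = HDTGExample(clip_features=torch.randn(75, 512), query="the chef decorates the cake")
prediction = HDTGPrediction(saliency=torch.rand(75), spans=[(10, 18), (52, 60)])
```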

Figure 1. A depiction of HD&TG. Given a video paired with its corresponding textual query, the goal of HD&TG is to predict frame-wise saliency scores and locate all the most relevant spans simultaneously.

Figure 2. Comparison between Moment-DETR (a) and QD-Net (b).

The primary challenge of HD&TG lies in generating cross-modal features that effectively capture query-related information, since these features are used both to predict highlights and to locate the query-matched spans. Inspired by DETR (Carion et al., 2020), Moment-DETR (Lei et al., 2021) designed a transformer encoder-decoder pipeline to tackle this challenge, as shown in Figure 2(a). However, Moment-DETR directly concatenates the video and text tokens for coarse fusion in the encoder, which mixes intra-modal contextual modeling with cross-modal feature interaction. When the similarity between video frames far surpasses the video-query similarity, the resulting cross-modal features carry little query information, leading to diminished performance. Moreover, object detection (OD) and TG both rely on decoder-based localization. Recent DETR-based research (S. Liu et al., 2022) shows that using dynamic bounding-box anchors as decoder queries alleviates the slow convergence of OD training. Yet Moment-DETR employs only learnable embeddings in the decoder and lacks explicit temporal span modeling, which slows convergence and limits accuracy on the TG task.
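As an illustration of the coarse fusion described above, the sketch below concatenates video and text tokens before a shared transformer encoder, so self-attention treats intra-modal and cross-modal token pairs identically. It is a simplified stand-in rather than Moment-DETR's actual code; the hidden size, layer count, and token counts are assumptions.

```python
# Simplified sketch of concatenation-based fusion (not Moment-DETR's code;
# the 256-d hidden size, layer count, and token counts are assumptions).
import torch
import torch.nn as nn

d_model = 256
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True),
    num_layers=2,
)

video_tokens = torch.randn(1, 75, d_model)   # (batch, clips, dim)
text_tokens = torch.randn(1, 12, d_model)    # (batch, query tokens, dim)

# Self-attention over the concatenated sequence handles video-video and
# video-text pairs with the same weights, so when video-video similarity
# dominates, the video features absorb little query information.
fused = encoder(torch.cat([video_tokens, text_tokens], dim=1))
video_features = fused[:, :video_tokens.size(1)]   # fed to the HD and TG heads
```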

In this paper, we propose a new HD&TG model named QD-Net (Query-guided refinement and Dynamic spans Network) to tackle the above issues. As shown in Figure 2(b), QD-Net decouples feature encoding from cross-modal interaction using a query-guided refinement module, which fuses video and text tokens to produce query-relevant cross-modal features. To capture intra-modal context from a global perspective, we adopt the simple yet efficient PoolFormer (Yu et al., 2022) in both the visual and text encoders. In addition, we design a dynamic span decoder that explicitly associates learnable embeddings with predicted span positions and speeds up training convergence for TG: the decoder maintains learnable 2D spans that are dynamically updated at each layer, and their size modulates the cross-attention weights within the decoder (a minimal sketch of this layer-wise span refinement appears after this paragraph). To demonstrate the superiority of QD-Net, we conduct comprehensive experiments and ablations on three publicly available datasets (QVHighlights, TVSum, and Charades-STA). The results show that QD-Net outperforms current state-of-the-art (SOTA) approaches; notably, on QVHighlights, our model scores 61.87 HD-HIT@1 and 61.88 TG-mAP@0.5, gains of +1.88 and +8.05 over the SOTA method. In summary, our principal contributions are the query-guided refinement module, the dynamic span decoder, and state-of-the-art results on these HD&TG benchmarks.
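The sketch below illustrates the layer-wise span-refinement idea in the style of DAB-DETR adapted to 1D time: learnable (center, width) spans serve as decoder queries and are refined at every layer. The parametrization, the sigmoid/inverse-sigmoid update, and all module names are our illustrative assumptions rather than the paper's implementation, and the span-size modulation of cross-attention is omitted for brevity.

```python
# Minimal sketch of a dynamic-span decoder (illustrative assumptions: the
# (center, width) parametrization, sigmoid/inverse-sigmoid updates, and all
# module names; the span-size modulation of cross-attention is omitted).
import torch
import torch.nn as nn


def inverse_sigmoid(x, eps=1e-5):
    x = x.clamp(eps, 1 - eps)
    return torch.log(x / (1 - x))


class DynamicSpanDecoderSketch(nn.Module):
    def __init__(self, d_model=256, num_layers=2, num_spans=10):
        super().__init__()
        # Learnable 2D spans (normalized center, width) act as decoder queries.
        self.spans = nn.Parameter(torch.rand(num_spans, 2))
        self.layers = nn.ModuleList(
            nn.TransformerDecoderLayer(d_model, nhead=8, batch_first=True)
            for _ in range(num_layers)
        )
        self.span_to_pos = nn.Linear(2, d_model)  # span -> positional query
        self.delta_head = nn.Linear(d_model, 2)   # decoder output -> span offset

    def forward(self, memory):
        # memory: (batch, num_clips, d_model) query-refined video features.
        batch = memory.size(0)
        spans = self.spans.unsqueeze(0).expand(batch, -1, -1)          # (batch, Q, 2)
        tgt = memory.new_zeros(batch, spans.size(1), memory.size(-1))  # span content features
        for layer in self.layers:
            tgt = layer(tgt + self.span_to_pos(spans), memory)
            # Dynamically refine the spans from the decoder output at every layer.
            spans = torch.sigmoid(inverse_sigmoid(spans) + self.delta_head(tgt))
        return spans, tgt   # predicted (center, width) spans and span features
```

In this sketch each decoder layer both consumes the current spans (through the positional query) and emits an offset that refines them, which is what ties the learnable embeddings to explicit temporal positions.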
