Optical Flow Method
The optical flow method exploits the temporal variations of pixel intensities in an image sequence and the correlation between consecutive frames to compute the motion of objects between adjacent frames, based on the correspondence between the previous frame and the current one. Optical flow can therefore be used to estimate and analyze object motion in a sequence, and existing methods divide into traditional algorithms and deep learning algorithms. Lucas and Kanade (1981) proposed the Lucas-Kanade sparse optical flow algorithm (Bruhn et al., 2005), which rests on three assumptions: brightness constancy, temporal persistence (small motion between frames), and spatial consistency (neighboring pixels move similarly). Bouguet introduced an improved Lucas-Kanade algorithm (Bouguet et al., 2001) based on a pyramid hierarchy that tracks from coarse to fine image levels, mitigating the difficulties of tracking fast-moving objects and handling affine deformations. Another traditional approach is dense optical flow, such as Farnebäck's method (Farnebäck, 2003), which approximates the neighborhood of each pixel with polynomials and computes the displacement of every point in the image; however, the trade-off between its accuracy and speed limits its practical application.

In recent years, deep learning (Behera et al., 2023; Li et al., 2022; Tembhurne et al., 2022; Zhou, 2022) has yielded promising results in optical flow estimation. For instance, FlowNet and RAFT (Boyer et al., 2009; Dosovitskiy et al., 2015; Ilg et al., 2017) use convolutional neural networks to predict optical flow for each pixel in the image and have advanced real-time estimation considerably. However, in table tennis training or competition videos, the bat's position changes constantly and rapidly during a stroke, producing severe motion blur; extracting reliable features in such scenarios is therefore highly challenging.
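As a concrete illustration of the Lucas-Kanade assumptions described above, the following NumPy sketch solves the windowed least-squares system for a single point: brightness constancy linearizes to the constraint Ix·u + Iy·v = -It, and spatial consistency justifies aggregating this constraint over a small window. The function name, window size, and synthetic test frames are illustrative choices, not part of the cited works.

```python
import numpy as np

def lucas_kanade_point(I0, I1, x, y, win=7):
    """Estimate the flow (u, v) at pixel (x, y) between frames I0 and I1 by
    solving the Lucas-Kanade least-squares system over a square
    (2*win+1) x (2*win+1) window (brightness constancy + spatial consistency)."""
    # Central-difference spatial gradients on the first frame
    Ix = (np.roll(I0, -1, axis=1) - np.roll(I0, 1, axis=1)) / 2.0
    Iy = (np.roll(I0, -1, axis=0) - np.roll(I0, 1, axis=0)) / 2.0
    # Temporal gradient: brightness change between the two frames
    It = I1 - I0
    ys = slice(y - win, y + win + 1)
    xs = slice(x - win, x + win + 1)
    # Stack the linearized constraint Ix*u + Iy*v = -It for every window pixel
    A = np.stack([Ix[ys, xs].ravel(), Iy[ys, xs].ravel()], axis=1)
    b = -It[ys, xs].ravel()
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
    return u, v

# Synthetic check: a smooth Gaussian blob shifted right by exactly one pixel,
# so the true flow is (u, v) = (1, 0). The blob is smooth enough that the
# small-motion (temporal persistence) assumption holds.
yy, xx = np.mgrid[0:64, 0:64]
I0 = np.exp(-((xx - 32.0) ** 2 + (yy - 32.0) ** 2) / (2.0 * 8.0 ** 2))
I1 = np.roll(I0, 1, axis=1)
u, v = lucas_kanade_point(I0, I1, x=34, y=32)
```

This single-scale solver breaks down exactly where the section notes: when displacements are large relative to the window, the linearization fails, which is what Bouguet's pyramidal variant addresses by estimating the flow coarse-to-fine and warping between levels.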