In this paper, we introduce a challenging new dataset, MLB-YouTube, designed for fine-grained activity detection. The dataset contains two settings: segmented video classification as well as activity detection in continuous videos. We experimentally compare various recognition approaches that capture temporal structure in activity videos, by classifying segmented videos and extending those approaches to continuous videos. We also compare models on the extremely difficult tasks of predicting pitch speed and pitch type from broadcast baseball videos. We find that learning temporal structure is valuable for fine-grained activity recognition.
Activity recognition is an important problem in computer vision with many applications within sports. Every major professional sporting event is recorded for entertainment purposes, but is also used for analysis by coaches, scouts, and media analysts. Many game statistics are currently tracked manually, but could be replaced by computer vision systems. Recently, the MLB has used the PITCHf/x and Statcast systems, which are able to automatically capture pitch speed and motion. These systems use multiple high-speed cameras and radar to capture detailed measurements for every player on the field. However, much of this data is not publicly available.
In this paper, we introduce a new dataset, MLB-YouTube, which contains densely annotated frames with activities from broadcast baseball videos. Unlike many existing activity recognition or detection datasets, ours focuses on fine-grained activity recognition. As shown in Fig. 1, the scene structure is very similar between activities; often the only difference is the motion of a single person. Additionally, we only have a single camera viewpoint from which to determine the activity. We experimentally compare various approaches for temporal feature pooling for both segmented video classification and activity detection in continuous videos.
2 Related Works
Activity recognition has been a popular research topic in computer vision [1, 10, 20, 25, 16]. Hand-crafted features, such as dense trajectories [25], gave promising results on many datasets. More recent works have focused on learning CNNs for activity recognition [3, 22]. Two-stream CNNs take spatial RGB frames and optical flow frames as input [20, 7]. 3D XYT convolutional models have been trained to learn spatio-temporal features [22, 3, 23, 8]. To train these CNN models, large-scale datasets such as Kinetics [11], THUMOS [9], and ActivityNet [6] have been created.
Many works have explored temporal feature aggregation for activity recognition. Ng et al. [13] compared various pooling methods and found that LSTMs and max-pooling the entire video performed best. Ryoo et al. [17] found that pooling intervals of different locations/lengths was beneficial to activity recognition. Piergiovanni et al. [14] found that learning important sub-event intervals and using those for classification improved performance.
Recently, segment-based 3D CNNs have been used to capture spatio-temporal information simultaneously for activity detection [26, 19, 18]. These approaches all rely on the 3D CNN to capture temporal dynamics, but its input usually spans only 16 frames. Some works have studied longer-term temporal structure [3, 10, 13, 24], but this was generally done with temporal pooling of local representations or (spatio-)temporal convolutions with larger fixed intervals. Recurrent neural networks (RNNs) have also been used to model activity transitions between frames [27, 28, 5].
3 MLB-YouTube Dataset
We created a large-scale dataset consisting of 20 baseball games from the 2017 MLB post-season available on YouTube, with over 42 hours of video footage. Our dataset consists of two components: segmented videos for activity recognition and continuous videos for activity detection. Our dataset is quite challenging as it is created from TV broadcast baseball games in which multiple different activities share the same camera angle. Further, the motion/appearance difference between the various activities is quite small (e.g., swinging the bat vs. bunting), as shown in Fig. 2. Many existing activity detection datasets, such as THUMOS [9] and ActivityNet [6], contain a large variety of activities that vary in setting, scale, and camera angle. This makes even a single frame from one activity (e.g., swimming) very different from that of another (e.g., basketball). In contrast, a single frame from one of our baseball videos is often not enough to classify the activity.
Fig. 3 shows the small difference between a ball and a strike. Distinguishing these activities requires detecting whether the batter swings, or detecting the umpire's signal for a strike (Fig. 4) versus no signal for a ball. Further complicating this task, the umpire can be occluded by the batter or catcher, and each umpire has a unique way of signaling a strike.
Our segmented video dataset consists of 4,290 video clips. Each clip is annotated with the various baseball activities that occur, such as swing, hit, ball, strike, foul, etc. A video clip can contain multiple activities, so we treat this as a multi-label classification task. A full list of the activities and the number of examples of each is shown in Table 1. We additionally annotated each clip containing a pitch with the pitch type (e.g., fastball, curveball, slider) and the speed of the pitch. We also collected a set of 2,983 hard negative examples where no activity occurs. These include views of the crowd, the field, or the players standing before or after a pitch. Examples of the activities and hard negatives are shown in Fig. 2.
Our continuous video dataset consists of 2,128 clips, each 1-2 minutes long, taken from the videos. Each video frame is annotated with the baseball activities that occur. Each continuous clip contains an average of 7.2 activities, resulting in a total of over 15,000 activity instances. Our dataset and models are available at https://github.com/piergiaj/mlb-youtube/
4 Segmented Video Recognition Approach
We explore various methods of temporal feature aggregation for segmented video activity recognition. With segmented videos, the classification task is much easier, as every frame in the video corresponds to the activity; the model does not need to determine when an activity begins and ends. Each of our approaches builds on a CNN that provides a per-frame (or per-segment) representation. We obtain this from standard two-stream CNNs [20, 7] using recent deep CNNs such as I3D [3] or InceptionV3 [21].
Given $v$, the $T \times D$ features from a video, where $T$ is the temporal length of the video and $D$ is the dimensionality of the feature, the standard method for feature pooling is max- or mean-pooling over the temporal dimension followed by a fully-connected layer to classify the video clip [13], as shown in Fig. 5(a). However, this provides only one representation for the entire video, and loses valuable temporal information. One way to address this is to use a fixed temporal pyramid of various lengths, as shown in Fig. 5(b). We divide the input video into intervals of various lengths (1/2, 1/4, and 1/8), and max-pool each interval. We concatenate these pooled features together, resulting in a $K \times D$ representation ($K$ is the number of intervals in the temporal pyramid), and use a fully-connected layer to classify the clip.
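To make the pooling variants concrete, the following is a minimal PyTorch sketch (not the released implementation) of max-pooling and fixed temporal pyramid pooling; the feature dimensionality, pyramid levels, and number of classes are illustrative assumptions.

```python
import torch
import torch.nn as nn


def max_pool_video(feats):
    """feats: (T, D) per-frame features -> (D,) clip representation (Fig. 5(a))."""
    return feats.max(dim=0)[0]


def temporal_pyramid_pool(feats, levels=(2, 4, 8)):
    """Max-pool 1/2, 1/4, and 1/8 intervals and concatenate (Fig. 5(b)).

    Returns a (K * D,) vector with K = sum(levels) = 14 intervals.
    """
    pooled = []
    for level in levels:
        # Split the T frames into `level` roughly equal intervals.
        for chunk in torch.chunk(feats, level, dim=0):
            pooled.append(chunk.max(dim=0)[0])
    return torch.cat(pooled, dim=0)


# Example: classify a 64-frame clip of 1024-dim features (sizes assumed).
feats = torch.randn(64, 1024)
pyramid = temporal_pyramid_pool(feats)           # (14 * 1024,)
classifier = nn.Linear(pyramid.numel(), 8)       # 8 activity classes assumed
logits = classifier(pyramid)
```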
We also try learning temporal convolution filters, which can learn to aggregate local temporal structure. The kernel size is $L \times 1$ and it is applied to each frame, so each timestep's representation contains information from nearby frames. We then apply max-pooling over the output of the temporal convolution and use a fully-connected layer to classify the clip, as shown in Fig. 5(c).
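A short sketch of this temporal convolution baseline, under the same assumptions (kernel length, feature size, and class count are placeholders):

```python
import torch
import torch.nn as nn

D, L, num_classes = 1024, 5, 8                                  # sizes assumed
temporal_conv = nn.Conv1d(D, D, kernel_size=L, padding=L // 2)  # L x 1 kernel over time
classifier = nn.Linear(D, num_classes)

feats = torch.randn(1, D, 64)                 # (batch, D, T) per-frame features
local = temporal_conv(feats)                  # each timestep now mixes nearby frames
clip_repr = local.max(dim=2)[0]               # max-pool over time -> (1, D)
logits = classifier(clip_repr)
```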
While temporal pyramid pooling allows some structure to be preserved, the intervals are predetermined and fixed. Previous works have found that learning the sub-intervals to pool is beneficial to activity recognition [14]. The learned intervals are controlled by three learned parameters: a center $g$, a width $\sigma$, and a stride $\delta$ used to parameterize $N$ Gaussians. Given $T$, the length of the video, we first compute the locations of the strided Gaussians as:
$$g_n = 0.5\cdot T\cdot(\widetilde{g}_n+1),\qquad \delta_n = \frac{T}{N-1}\,\widetilde{\delta}_n,\qquad \mu_n^i = g_n + (i-0.5N+0.5)\,\delta_n \tag{1}$$
The filters are then created as:
$$F_m[i,t] = \frac{1}{Z_m}\exp\!\left(-\frac{(t-\mu_m^i)^2}{2\sigma_m^2}\right),\qquad i\in\{0,1,\ldots,N-1\},\ t\in\{0,1,\ldots,T-1\} \tag{2}$$
where $Z_m$ is a normalization constant.
We apply $F$ to the $T \times D$ video representation by matrix multiplication, resulting in an $N \times D$ representation which is used as input to a fully-connected layer for classification. This method is shown in Fig. 5(d).
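A hedged PyTorch sketch of this sub-event pooling follows. The tanh/exp squashing of the raw center, stride, and width parameters is an assumption made to keep them in sensible ranges, and the number of Gaussians and feature sizes are placeholders.

```python
import torch
import torch.nn as nn


class SubEventPooling(nn.Module):
    """Learns N strided Gaussians over time (Eqs. 1-2) and pools the (T, D)
    features into an (N, D) sub-event representation (Fig. 5(d))."""

    def __init__(self, num_gaussians=4):
        super().__init__()
        self.N = num_gaussians
        self.center = nn.Parameter(torch.zeros(1))     # raw center (g~)
        self.stride = nn.Parameter(torch.zeros(1))     # raw stride (delta~, before exp)
        self.log_sigma = nn.Parameter(torch.zeros(1))  # raw width (before exp)

    def filters(self, T):
        g = 0.5 * T * (torch.tanh(self.center) + 1)            # Eq. (1)
        delta = (T / (self.N - 1)) * torch.exp(self.stride)
        sigma = torch.exp(self.log_sigma)
        i = torch.arange(self.N, dtype=torch.float32)
        mu = g + (i - 0.5 * self.N + 0.5) * delta               # Gaussian centers, (N,)
        t = torch.arange(T, dtype=torch.float32)
        F = torch.exp(-(t[None, :] - mu[:, None]) ** 2 / (2 * sigma ** 2))  # Eq. (2)
        return F / (F.sum(dim=1, keepdim=True) + 1e-6)          # Z_m normalization, (N, T)

    def forward(self, feats):                                    # feats: (T, D)
        return self.filters(feats.shape[0]) @ feats              # (N, D)


# Example usage with assumed sizes: pool a clip and classify it.
pool = SubEventPooling(num_gaussians=4)
rep = pool(torch.randn(64, 1024))                # (4, 1024)
logits = nn.Linear(4 * 1024, 8)(rep.flatten())
```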
Other works have used LSTMs [13, 4] to model temporal structure in videos. We also compare to a bi-directional LSTM with 512 hidden units where we use the last hidden state as input to a fully-connected layer for classification.
We formulate our tasks as multi-label classification and train these models to minimize binary cross entropy:
$$L(v) = -\sum_c \big[\,z_c\log(p(c\,|\,G(v))) + (1-z_c)\log(1-p(c\,|\,G(v)))\,\big] \tag{3}$$
where $G(v)$ is the function that pools the temporal information (i.e., max-pooling, LSTM, temporal convolution, etc.), and $z_c$ is the ground-truth label for class $c$.
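In practice this objective corresponds to standard binary cross entropy over the class logits; a minimal sketch with assumed sizes:

```python
import torch
import torch.nn as nn

num_classes, feat_dim = 8, 1024
classifier = nn.Linear(feat_dim, num_classes)
criterion = nn.BCEWithLogitsLoss()               # Eq. (3), averaged over classes and clips

pooled = torch.randn(2, feat_dim)                # G(v) for a batch of two clips
labels = torch.tensor([[1., 0., 1., 0., 0., 0., 0., 0.],
                       [0., 1., 0., 0., 1., 0., 0., 0.]])   # multi-label targets
loss = criterion(classifier(pooled), labels)
loss.backward()
```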
5 Activity Detection in Continuous Videos
Activity detection in continuous videos is a more challenging problem. Here, our objective is to classify each frame with the occurring activities. Unlike segmented videos, there are multiple instances of activities occurring sequentially, often separated by frames with no activity. This requires the model to learn to detect the start and end of activities. As a baseline, we train a single fully-connected layer as a per-frame classifier. This baseline uses no temporal information beyond what is already present in the per-frame features.
We extend the approaches presented for segmented video classification to continuous videos by applying each approach in a temporal sliding window fashion. To do this, we first pick a fixed window duration (i.e., a temporal window of $L$ features). We apply max-pooling to each window (as in Fig. 5(a)) and classify each pooled segment.
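One way to realize this sliding-window extension is a stride-1 max-pool over time, sketched below; the window length, feature sizes, and class count are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

L, D, T, num_classes = 16, 1024, 300, 8          # window length and sizes assumed
feats = torch.randn(1, D, T)                     # (batch, D, T) continuous-video features

# Stride-1 max-pool gives every frame a pooled summary of the L frames around it.
windowed = F.max_pool1d(feats, kernel_size=L, stride=1, padding=L // 2)[..., :T]
per_frame_logits = nn.Linear(D, num_classes)(windowed.transpose(1, 2))   # (1, T, C)
```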
We can similarly extend temporal pyramid pooling. Here, we split the window of length $L$ into segments of length $L/2$, $L/4$, and $L/8$, resulting in 14 segments for each window. We apply max-pooling to each segment and concatenate the pooled features together. This gives a $14 \times D$-dim representation for each window which is used as input to the classifier.
For temporal convolutional models on continuous videos, we slightly alter the segmented video approach. Here, we learn a temporal convolutional kernel of length $L$ and convolve it with the input video features. This operation takes input of size $T \times D$ and produces output of size $T \times D$. We then apply a per-frame classifier on this representation. This allows the model to learn to aggregate local temporal information.
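A sketch of this fully-convolutional variant (kernel length and sizes are assumptions):

```python
import torch
import torch.nn as nn

L, D, num_classes = 15, 1024, 8                                  # sizes assumed
temporal_conv = nn.Conv1d(D, D, kernel_size=L, padding=L // 2)   # (T x D) -> (T x D)
per_frame_cls = nn.Conv1d(D, num_classes, kernel_size=1)         # shared per-frame classifier

feats = torch.randn(1, D, 300)                                   # 300 timesteps of features
logits = per_frame_cls(temporal_conv(feats))                     # (1, num_classes, 300)
```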
To extend the sub-event model to continuous videos, we follow the approach above, but set $T = L$ in Eq. 1. This results in filters of length $L$. Given $v$, the $T \times D$ video representation, we convolve (instead of using matrix multiplication) the sub-event filters $F$ with the input, resulting in an $N \times D \times T$-dim representation. We use this as input to a fully-connected layer to classify each frame.
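A hedged sketch of this convolutional form of sub-events; the filter centers and widths are hard-coded here for brevity (in the model they are learned as in Eqs. 1-2), and all sizes are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

T, D, L, N, num_classes = 200, 1024, 16, 3, 8    # all sizes assumed
feats = torch.randn(T, D)

# Length-L Gaussian filters standing in for the learned sub-event filters.
t = torch.arange(L, dtype=torch.float32)
mu = torch.tensor([3.0, 8.0, 12.0])              # illustrative centers
gaussians = torch.exp(-(t[None, :] - mu[:, None]) ** 2 / (2 * 2.0 ** 2))
gaussians = gaussians / gaussians.sum(dim=1, keepdim=True)        # (N, L)

# Convolve every filter with every feature channel over time -> (N, D, T).
x = feats.t().unsqueeze(1)                                        # (D, 1, T)
rep = F.conv1d(x, gaussians.unsqueeze(1), padding=L // 2)[..., :T].permute(1, 0, 2)

# Per-frame classification from the flattened N x D sub-event representation.
frame_logits = nn.Linear(N * D, num_classes)(rep.flatten(0, 1).t())   # (T, num_classes)
```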
We train the model to minimize the per-frame binary cross-entropy:
$$L(v) = -\sum_{t,c} \big[\,z_{t,c}\log(p(c\,|\,H(v_t))) + (1-z_{t,c})\log(1-p(c\,|\,H(v_t)))\,\big] \tag{4}$$
where $v_t$ is the per-frame or per-segment feature at time $t$, $H(v_t)$ is the sliding-window application of one of the feature pooling methods, and $z_{t,c}$ is the ground-truth label for class $c$ at time $t$.
A recent approach to learn ‘super-events’ (i.e., global video context) was proposed and found to be effective for activity detection in continuous videos [15]. The approach learns a set of temporal structure filters that are modeled as a set of $N$ Cauchy distributions. Each distribution learns a center $x_n$ and a width $\gamma_n$. Given $T$, the length of the video, the filters are constructed by:
$$\hat{x}_n = \frac{(T-1)\cdot(\tanh(x_n)+1)}{2},\qquad \hat{\gamma}_n = \exp\!\big(1-2\cdot|\tanh(\gamma_n)|\big),\qquad F[t,n] = \frac{1}{Z_n\,\pi\hat{\gamma}_n\left(1+\left(\frac{t-\hat{x}_n}{\hat{\gamma}_n}\right)^2\right)} \tag{5}$$
where $Z_n$ is a normalization constant, $t \in \{1, 2, \ldots, T\}$, and $n \in \{1, 2, \ldots, N\}$.
The filters are combined with learned per-class soft-attention weights $A$, and the super-event representation is computed as:
$$S_c = \sum_m^M A_{c,m}\cdot\sum_t^T F_m[t]\cdot v_t \tag{6}$$
where $v$ is the $T \times D$ video representation. These filters allow the model to learn which intervals of temporal context are useful to focus on. The super-event representation is concatenated to each timestep and used for classification. We also try concatenating the super-event and sub-event representations for classification, creating a three-level hierarchy of event representations.
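A hedged sketch of the super-event computation in Eqs. (5)-(6); the filter count, class count, and the softmax over the attention weights are assumptions.

```python
import math
import torch
import torch.nn as nn


class SuperEvents(nn.Module):
    """N Cauchy-shaped temporal structure filters combined with per-class
    soft attention to produce one global representation per class."""

    def __init__(self, num_filters=3, num_classes=8, feat_dim=1024):
        super().__init__()
        self.x = nn.Parameter(torch.zeros(num_filters))                   # centers
        self.gamma = nn.Parameter(torch.zeros(num_filters))               # widths
        self.attn = nn.Parameter(torch.zeros(num_classes, num_filters))   # A in Eq. (6)

    def filters(self, T):
        t = torch.arange(T, dtype=torch.float32)
        x_hat = (T - 1) * (torch.tanh(self.x) + 1) / 2                    # Eq. (5)
        g_hat = torch.exp(1 - 2 * torch.abs(torch.tanh(self.gamma)))
        F = 1.0 / (math.pi * g_hat[:, None] *
                   (1 + ((t[None, :] - x_hat[:, None]) / g_hat[:, None]) ** 2))
        return F / F.sum(dim=1, keepdim=True)                             # Z_n, (N, T)

    def forward(self, feats):                                              # feats: (T, D)
        pooled = self.filters(feats.shape[0]) @ feats                      # sum_t F_m[t] v_t
        A = torch.softmax(self.attn, dim=1)                                # per-class attention
        return A @ pooled                                                  # (num_classes, D)


# The class-c row of the output is concatenated to each timestep's feature
# before the per-frame classifier (sizes assumed).
super_events = SuperEvents()(torch.randn(300, 1024))                       # (8, 1024)
```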
6 Experiments
6.1 Implementation Details
As our base per-segment CNN, we use the I3D [3] network pretrained on the ImageNet and Kinetics [11] datasets. I3D obtained state-of-the-art results on segmented video tasks, and this allows us to obtain reliable per-segment feature representations. We also use a two-stream version of InceptionV3 [21] pretrained on ImageNet and Kinetics as our base per-frame CNN, and compare the two. We chose InceptionV3 as it is deeper than previous two-stream CNNs such as [20, 7]. We extracted frames from the videos at 25 fps and computed TV-L1 [29] optical flow, clipped to $[-20, 20]$. For InceptionV3, we computed features for every 3 frames (8 fps). For I3D, every frame was used as input; I3D has a temporal stride of 8, resulting in 3 features per second (3 fps). We implemented the models in PyTorch. We trained our models using the Adam [12] optimizer with the learning rate set to 0.01, decayed by a factor of 0.1 after every 10 training epochs, for 50 epochs in total. Our source code, dataset, and trained models are available at https://github.com/piergiaj/mlb-youtube/
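The training configuration described above can be reproduced with a few lines of PyTorch; the model and the one-batch loader below are placeholders, not the released training script.

```python
import torch

model = torch.nn.Linear(1024, 8)                  # stand-in for any of the models above
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.1)
criterion = torch.nn.BCEWithLogitsLoss()

for epoch in range(50):
    for feats, labels in [(torch.randn(4, 1024), torch.randint(0, 2, (4, 8)).float())]:
        optimizer.zero_grad()
        loss = criterion(model(feats), labels)
        loss.backward()
        optimizer.step()
    scheduler.step()                              # decay the learning rate by 0.1 every 10 epochs
```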
6.2 Segmented Video Activity Recognition
We first performed binary pitch/non-pitch classification of each video segment. This task is relatively easy, as pitch and non-pitch frames are visually quite different. The results, shown in Table 2, do not show much difference between the various features or models.
6.2.1 Multi-label Classification
We evaluate and compare the various approaches to temporal feature aggregation by computing mean average precision (mAP) for each video clip, a standard evaluation metric for multi-label classification tasks. Table 4 compares the performance of the various temporal feature pooling methods. We find that all approaches outperform mean/max-pooling, confirming that maintaining some temporal structure is important for activity recognition. Fixed temporal pyramid pooling and LSTMs give some improvement. Temporal convolution provides a larger increase in performance; however, it requires significantly more parameters (see Table 3). Learning sub-events [14] gives the best performance on this task. While LSTMs and temporal convolution have been used for this task previously, they require a greater number of parameters and perform worse, likely due to overfitting. Additionally, LSTMs require the video features to be processed sequentially, as each timestep requires the output from the previous timestep, while the other approaches can be completely parallelized.
In Table 5, we compare the average precision for each activity class. Learning temporal structure is especially helpful for frame-based features (e.g., InceptionV3), which capture minimal temporal information, compared to segment-based features (e.g., I3D), which capture some temporal information. Additionally, we find that sub-event learning helps especially in the case of strikes, hits, foul balls, and hit-by-pitch events, as those all produce changes in the video features after the event. For example, after the ball is hit, the camera often follows the ball's trajectory, while after a hit-by-pitch the camera follows the player walking to first base, as shown in Fig. 6 and Fig. 7.
6.2.2 Pitch Speed Regression
Pitch speed regression from video frames is a challenging task because it requires the network to learn to localize the start and end of the pitch, then compute the speed from a weak signal (i.e., only the pitch speed label). The baseball is small and often occluded by the pitcher. Professional baseball pitchers can throw the ball in excess of 100 mph, and the pitch only travels 60.5 ft, so the ball is in the air for roughly 0.5 seconds. Using our initial frame rates of 8 fps and 3 fps, there were only 1-2 features capturing the ball in flight, which we found was not enough to determine pitch speed. The YouTube videos are available at 60 fps, so we recomputed optical flow and extracted RGB frames at 60 fps. We use a fully-connected layer with one output to predict the pitch speed and minimize the $L_1$ loss between the ground-truth and predicted speeds. Using features extracted at 60 fps, we were able to determine pitch speed with 3.6 mph average error. Table 6 compares the various models, and Fig. 8 shows the sub-events learned for various speeds.
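A minimal sketch of this regression head (the pooling choice, feature sizes, and ground-truth speeds are illustrative assumptions):

```python
import torch
import torch.nn as nn

feat_dim = 1024
speed_head = nn.Linear(feat_dim, 1)              # single-output regression layer

clip_feats = torch.randn(4, 360, feat_dim)       # 60 fps features for 4 clips (sizes assumed)
pooled = clip_feats.max(dim=1)[0]                # any of the pooling methods above could be used
pred_mph = speed_head(pooled).squeeze(1)
loss = nn.L1Loss()(pred_mph, torch.tensor([92.0, 87.5, 95.0, 78.0]))   # ground-truth speeds (mph)
loss.backward()
```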
6.2.3 Pitch Type Classification
We experiment to see if it is possible to predict the pitch type from video. This is an extremely challenging problem because it is adversarial: pitchers practice disguising their pitches from batters. Additionally, the difference between pitches can be as small as a difference in the grip on the ball and which way it rotates with respect to the laces, which is rarely visible in broadcast baseball videos. In addition to the video features used in the previous experiments, we also extract pose using OpenPose [2]. These pose features are heatmaps of joint and body part locations, which we stack along the channel axis and use as input to an InceptionV3 CNN newly trained on this task. We chose to try pose features as body mechanics can also vary between pitches (e.g., the stride length and arm angle can differ between fastballs and curveballs). Our dataset has 6 different pitches (fastball, sinker, curveball, changeup, slider, and knuckle-curve). We report our results in Table 7. We find that LSTMs actually perform worse than the baseline, likely due to overfitting the small differences between pitch types, while learning sub-events helps. We observe that fastballs are the easiest to detect (68% accuracy), followed by sliders (45% accuracy), while sinkers are the hardest to classify (12%).
6.3 Continuous Video Activity Detection
We evaluate the extended models on continuous videos using per-frame mean average precision (mAP); the results are shown in Table 8. This setting is more challenging than the segmented videos, as the model must determine when each activity starts and ends, and the negative examples are more ambiguous than the hard negatives in the segmented dataset (e.g., the model has to determine when the pitch event begins, rather than just distinguishing it from the pitcher standing on the mound). We find that all models improve over the baseline per-frame classification, confirming that temporal information is important for detection. Fixed temporal pyramid pooling outperforms max-pooling, while the LSTM and temporal convolution seem to overfit due to their larger number of parameters. The convolutional form of sub-events, which pools local temporal structure, especially helps frame-based features, and less so segment-based features. Using the super-event approach [15] further improves performance, and combining the convolutional sub-event representation with the super-event representation provides the best performance.
7 Conclusion
We introduced a challenging new dataset, MLB-YouTube, for fine-grained activity recognition in videos. We experimentally compare various recognition approaches with temporal feature pooling for both segmented and continuous videos. We find that learning sub-events to select the temporal regions-of-interest provides the best performance for segmented video classification. For detection in continuous videos, we find that learning convolutional sub-events combined with the super-event representation to form a three-level activity hierarchy provides the best performance.