TY - JOUR
T1 - Gesture Phase Segmentation Dataset
T2 - An Extension for Development of Gesture Analysis Models
AU - Sánchez-Ancajima, Raúl A.
AU - Peres, Sarajane Marques
AU - López-Céspedes, Javier A.
AU - Saly-Rosas-Solano, José L.
AU - Hernández, Ronald M.
AU - Saavedra-López, Miguel A.
N1 - Publisher Copyright:
© 2022, Innovative Information Science and Technology Research Group. All rights reserved.
PY - 2022/11
Y1 - 2022/11
N2 - In recent years, experts in gesture theory have shown interest in automating the discovery of gesture information. Such automation can help reduce the inherent subjectivity of gesture studies. To produce information for linguistic and psycholinguistic studies, researchers typically analyze videos of people speaking and gesturing. This annotation task is costly, and automating it is the goal. Such videos compose the datasets that enable the development of automated models capable of carrying out part of the analysis of gestures. In this paper, we present detailed documentation of the Gesture Phase Segmentation Dataset, published in the UCI Machine Learning Repository, together with an extension of that dataset. The dataset is specially prepared for developing models capable of segmenting gestures into their phases. The extended dataset comprises nine videos of three people gesturing while telling stories. The data were captured with a Microsoft Kinect sensor and are represented by spatial coordinates and temporal information (velocity and acceleration). The data are labeled according to the four gesture phases (preparation, stroke, hold, and retraction) plus rest positions.
KW - Dataset
KW - Gesture Segmentation
KW - Gesture Studies
KW - Phases of Gesture
UR - http://www.scopus.com/inward/record.url?scp=85146952366&partnerID=8YFLogxK
U2 - 10.58346/JISIS.2022.I4.010
DO - 10.58346/JISIS.2022.I4.010
M3 - Original Article
AN - SCOPUS:85146952366
SN - 2182-2069
VL - 12
SP - 139
EP - 155
JO - Journal of Internet Services and Information Security
JF - Journal of Internet Services and Information Security
IS - 4
ER -