In recent years, experts in gesture theory have shown interest in automating the discovery of gesture information. Such automation can help reduce the inherent subjectivity of gesture studies. To produce information for linguistic and psycholinguistic studies, researchers typically analyze videos of people speaking and gesturing. This annotation task is costly, and it is the target of automation. Such videos make up the datasets that enable the development of automated models capable of carrying out part of the analysis of gestures. In this paper, we present detailed documentation of the Gesture Phase Segmentation Dataset, published in the UCI Machine Learning Repository, and an extension of that dataset. The dataset is specially prepared for developing models capable of segmenting gestures into their phases. The extended dataset comprises nine videos of three people gesturing while telling stories. The data were captured with a Microsoft Kinect sensor and are represented by spatial coordinates and temporal information (velocity and acceleration). The data are labeled according to four gesture phases (preparation, stroke, hold, and retraction) and rest positions.
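The abstract notes that the data combine spatial coordinates with temporal information (velocity and acceleration). As a minimal sketch of how such temporal features could be derived from coordinate sequences, the following uses finite differences over frames; the function name, sampling rate, and use of per-frame magnitudes are assumptions for illustration, not the dataset's documented procedure:

```python
import numpy as np

def temporal_features(coords, fps=30.0):
    """Derive speed and acceleration magnitudes from a sequence of
    spatial coordinates via finite differences (hypothetical sketch;
    fps and magnitude-based features are illustrative assumptions)."""
    coords = np.asarray(coords, dtype=float)   # shape: (frames, dims)
    dt = 1.0 / fps                             # seconds per frame
    velocity = np.gradient(coords, dt, axis=0)         # per-axis velocity
    acceleration = np.gradient(velocity, dt, axis=0)   # per-axis acceleration
    speed = np.linalg.norm(velocity, axis=1)           # scalar speed per frame
    accel_mag = np.linalg.norm(acceleration, axis=1)   # scalar accel per frame
    return speed, accel_mag

# Example: a point moving at constant velocity along x (1 unit per second)
frames = np.stack([np.linspace(0.0, 1.0, 31), np.zeros(31)], axis=1)
speed, accel = temporal_features(frames, fps=30.0)
```

Constant-velocity motion yields a constant speed and near-zero acceleration, which is a quick sanity check for any such feature pipeline.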
|Original language||American English|
|Journal||Journal of Internet Services and Information Security|
|Status||Indexed - Nov. 2022|
Bibliographical note: Publisher Copyright:
© 2022, Innovative Information Science and Technology Research Group. All rights reserved.