In collaboration with the ViCom and DGS-Korpus projects (1), we use computer vision (OpenPose (2)) to automatically extract measurements from 2D video of spoken and signed languages, in order to analyse and compare the phonetic properties of head nods fulfilling different functions in natural dyadic interaction. We begin with the head nods produced by the addressee and focus on two types: affirmative nods and feedback nods. Affirmative nods give a positive answer to a yes/no question. Feedback nods give positive feedback to the proposition produced by the signer/speaker and signal participation and engagement in the conversation.
We extract landmark coordinates from videos of natural conversation in various languages and measure several features: the number of head turns, the duration of the movement, the maximum amplitude of the movement, and the velocity (speed) of the head movement. By velocity we mean the average distance the nose moves up and down per second during the nod. Comparing head nods fulfilling the two functions, we find that they differ with regard to some of these phonetic features.
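The feature extraction described above can be sketched as follows. This is a minimal, hypothetical illustration (not the project's actual pipeline): it assumes we already have the vertical nose coordinate per frame from OpenPose and a known frame rate, and computes duration, number of head turns (direction changes), maximum amplitude, and mean velocity from that one trace. The function name `nod_features` and the toy trace are invented for illustration.

```python
import numpy as np

def nod_features(nose_y, fps=25.0):
    """Compute simple phonetic features of a head nod from the vertical
    nose coordinate over time (hypothetical helper, for illustration)."""
    y = np.asarray(nose_y, dtype=float)
    duration = len(y) / fps                      # duration in seconds
    diffs = np.diff(y)
    # a "turn" is a change of direction in the vertical movement
    signs = np.sign(diffs)
    signs = signs[signs != 0]                    # ignore frames with no motion
    turns = int(np.sum(signs[1:] != signs[:-1]))
    amplitude = float(y.max() - y.min())         # max displacement (pixels)
    velocity = float(np.abs(diffs).sum() / duration)  # mean speed (px/s)
    return {"duration_s": duration, "turns": turns,
            "amplitude_px": amplitude, "velocity_px_s": velocity}

# toy trace: two down-up nod cycles sampled at 25 fps
trace = [0, 5, 10, 5, 0, 5, 10, 5, 0, 0]
print(nod_features(trace))
# → {'duration_s': 0.4, 'turns': 3, 'amplitude_px': 10.0, 'velocity_px_s': 100.0}
```

In a real setting the trace would come from OpenPose's nose keypoint, and amplitude would typically be normalised (e.g. by face size) so that values are comparable across recordings.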