FaceSync
Dartmouth College doctoral candidate Jin Hyun Cheong and colleagues developed an open-source device and Python toolbox for recording and synchronizing facial expressions in response to videos or events.
Facial expressions are an invaluable measure of the emotions an individual is feeling. By understanding emotions and the facial expressions connected to them, researchers can gain deeper insight into human behavior. This information also has clinical applications, such as pain detection and evaluating symptom intensity in several neurodegenerative disorders. Collecting facial expression data via manual coding can be tedious and time-consuming, and it requires multiple qualified coders. Automated facial expression analysis substitutes for manual coding and has made facial expression research far easier and more popular. While this advance alleviated many of the difficulties of extracting facial expression data, obtaining high-quality recordings of a subject’s face in the laboratory remains a challenge, even with webcams and other newer video recording technology. The videos recorded during these studies often lack a clear view of the face for the entirety of the experiment, or lack the spatial or temporal resolution needed for data collection and analysis. Subjects can turn their heads or fidget during the experiment, making facial expression analysis nearly impossible.
To solve this problem, Cheong et al. designed an open-source guide to building an affordable head-mounted camera rig that maintains high temporal and spatial resolution of the face throughout an experiment. The device is paired with FaceSync, a Python toolbox available on GitHub, which automatically synchronizes the recorded facial expressions to videos and events such as social interactions. Together, these let researchers obtain a clear, head-on view of the subject’s face regardless of movement during the experiment and then align the facial expressions to stimulus onsets. With this unobstructed view of the face, researchers can learn more about emotion-related behavior than ever before. Cheong et al. supported this claim by showing the device in action, conducting four studies that tested its capabilities in various settings. The first showed that when a subject rotates their head, the head-mounted device still captures a direct recording of the face. In the following three studies, subjects wore the device while viewing images or videos, both individually and in groups. Across all four studies, Cheong et al. demonstrated that the device accurately captured subjects’ facial expressions regardless of motion, experimental setting, or stimuli.
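FaceSync performs this alignment by matching the audio tracks of the different recordings. As a minimal sketch of that general idea, the snippet below finds the time offset between two audio signals using cross-correlation; the function and variable names here are illustrative assumptions, not FaceSync’s actual API, so see the GitHub repository for the real toolbox.

```python
# Illustrative sketch of audio-based video synchronization, the general
# technique FaceSync is built around. Names here are hypothetical; the
# actual toolbox lives in the FaceSync GitHub repository.
import numpy as np

def find_offset(reference_audio, recording_audio, sample_rate):
    """Estimate how many seconds recording_audio lags reference_audio
    by locating the peak of their cross-correlation."""
    # Cross-correlate the two mono audio signals at every possible shift.
    corr = np.correlate(recording_audio, reference_audio, mode="full")
    # The index of the peak gives the sample shift between the signals.
    lag_samples = np.argmax(corr) - (len(reference_audio) - 1)
    return lag_samples / sample_rate

# Usage (hypothetical variables): once the offset is known, trim the
# head-mounted camera's video so its frames line up with the stimulus.
# offset_sec = find_offset(stimulus_audio, facecam_audio, 44100)
```

Once each camera’s offset relative to the stimulus is known, every frame of the face recording can be mapped to the corresponding moment in the video or event, which is what makes the downstream facial expression analysis possible.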
This research tool was created by your colleagues. Please acknowledge the Principal Investigator, cite the article in which the tool was described, and include an RRID in the Materials and Methods of your future publications. Project portal RRID:SCR_021393; Software RRID:SCR_021396
Read the Paper
Learn more about this project, the hardware components, and the software development from F1000!
FaceSync GitHub Repository
Get access to the software and related tutorials from the FaceSync GitHub Repository!
Thanks, Prairie!
This post was brought to you by Prairie Fiebel. This project summary is a part of the collection from neuroscience undergraduate students in the Computational Methods course at American University.
Check out projects similar to this!