
Tag: video analysis

DeepFly3D, a deep learning-based approach for 3D limb and appendage tracking in tethered, adult Drosophila

July 23, 2020

Semih Günel and colleagues have created a deep learning-based pose estimator for studying how neural circuits control limbed behaviors in tethered Drosophila.

Appendage tracking is an important behavioral measure in motor circuit research. Until now, algorithms for accurate 3D pose estimation in animals as small as Drosophila did not exist. Instead, researchers have relied on alternatives such as placing small reflective markers on fly leg segments. While marker-based tracking works well for larger animals, in Drosophila-sized animals it restricts motion, is labor intensive, and cannot recover 3D information, limiting the accuracy of behavioral measures. DeepFly3D is a PyTorch- and PyQt5-based software package designed to solve these problems while providing a user-friendly interface for pose estimation and appendage tracking. It uses supervised deep learning for 2D joint detection and a multicamera setup to iteratively infer 3D poses, yielding automated measurements with sub-millimeter accuracy. Remarkably, DeepFly3D is not limited to Drosophila: it can be adapted to study other animals, including rodents, primates, and humans. DeepFly3D therefore offers versatile pose estimation with an extraordinary level of behavioral detail and accuracy.
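At the heart of this approach is multi-view triangulation: each camera contributes a 2D detection of a joint, and the joint's 3D position is recovered from the calibrated camera geometry. Below is a minimal sketch of linear (DLT) triangulation, assuming known camera projection matrices; it illustrates the principle only and is not DeepFly3D's actual code, which additionally refines poses and calibration iteratively.

```python
import numpy as np

def triangulate_point(proj_mats, points_2d):
    """Linear (DLT) triangulation of one joint from >= 2 camera views.

    proj_mats : list of 3x4 camera projection matrices (assumed calibrated)
    points_2d : list of (x, y) pixel detections, one per camera
    Returns the 3D point minimizing the algebraic reprojection error.
    """
    rows = []
    for P, (x, y) in zip(proj_mats, points_2d):
        # Each view contributes two linear constraints on the 3D point X:
        # x * (P[2] @ X) - (P[0] @ X) = 0, and likewise for y.
        rows.append(x * P[2] - P[0])
        rows.append(y * P[2] - P[1])
    A = np.stack(rows)
    # The solution is the right singular vector with the smallest singular value.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]  # homogeneous -> Euclidean coordinates
```

Applied per joint and per frame across all cameras, this yields the full 3D pose trajectory.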

Read more in the paper!

Or check out the project’s GitHub!

Günel, S., Rhodin, H., Morales, D., Campagnolo, J., Ramdya, P., & Fua, P. (2019). DeepFly3D, a deep learning-based approach for 3D limb and appendage tracking in tethered, adult Drosophila. eLife, 8. https://doi.org/10.7554/eLife.48571


April 30, 2020

Jeffrey P. Gill and colleagues have developed and shared a new toolbox for synchronizing video and neural signals, cleverly named neurotic!

Collecting neural and behavioral data is fundamental to behavioral neuroscience, and the ability to synchronize these data streams is just as important as collecting them in the first place. To make this process a little simpler, Gill et al. developed an open-source option called neurotic, a NEUROscience Tool for Interactive Characterization. The tool is written in Python and includes a simple GUI, which makes it accessible to users with little coding experience. Users can read in a variety of file formats for neural data and video, which they can then process, filter, analyze, annotate, and plot. To demonstrate its effectiveness across species and signal types, the authors tested the software on Aplysia feeding behavior and human beam walking. Given its open-source nature and strong integration with other popular open-source packages, this software will continue to develop and improve as the community uses it.
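To make the core idea concrete, here is a minimal sketch of the alignment problem that synchronization tools like neurotic manage, assuming a constant video offset measured from a shared synchronization pulse; the names and rates below are illustrative, not neurotic's actual API.

```python
# Map video frames onto a neural recording's timebase, assuming the video's
# first frame occurs at a known offset into the recording (e.g., measured
# from a shared TTL pulse). All constants here are assumptions.

NEURAL_FS = 30_000.0   # neural sampling rate (Hz), assumed
VIDEO_FPS = 30.0       # video frame rate (Hz), assumed
VIDEO_OFFSET_S = 2.4   # video start time relative to neural t=0, assumed

def frame_to_sample(frame_idx):
    """Map a video frame index to the nearest neural sample index."""
    t = VIDEO_OFFSET_S + frame_idx / VIDEO_FPS
    return int(round(t * NEURAL_FS))

# e.g., pull a 100 ms window of neural data centered on frame 450
center = frame_to_sample(450)
half = int(0.05 * NEURAL_FS)
window = slice(center - half, center + half)
```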

Read more about neurotic here!

Check out the documentation here.

Gill, J. P., Garcia, S., Ting, L. H., Wu, M., & Chiel, H. J. (2020). neurotic: NEUROscience Tool for Interactive Characterization. eNeuro. doi: 10.1523/ENEURO.0085-20.2020

Open Source Whisking Video Database

April 2, 2020

Alireza Azarfar and colleagues at the Donders Institute for Brain, Cognition and Behaviour at Radboud University have published an open-source database of high-speed videos of whisking behavior in freely moving rodents.

As responsible citizens, it's possible you are missing the lab and working from home. Maybe you have plenty to do, or maybe you're looking for new data to analyze to increase your knowledge of active sensing in rodents! In case you don't have that data at hand and can't collect it yourself for a few more weeks, we have a treat for you: Azarfar et al. have shared a database of 6,642 high-quality videos featuring juvenile and adult male rats and adult male mice exploring stationary objects. The dataset spans a wide variety of experimental conditions, including genetic, pharmacological, and sensory-deprivation interventions, to explore how different factors modulate active sensing behavior. Information about interventions and experimental conditions is available as a supplementary Excel file.

The videos are available as mp4 files as well as MATLAB matrices that can be converted into a variety of data formats (a sketch of one such conversion follows below). All videos underwent quality control, and they feature varying amounts of noise and background, which makes the database a great tool for mastering video analysis. A toolbox is available from this group on GitHub for a variety of whisker analysis methods, including nose and whisker tracking. This database is a great resource for studying sensorimotor computation, top-down mechanisms of sensory navigation, cross-species comparisons of active sensing, and the effects of pharmacological interventions on whisking behavior.
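As a hedged example of working with the MATLAB matrices, the snippet below converts one (assumed) grayscale frame stack to an mp4 in Python; the file name, variable key, and frame rate are guesses to be checked against the database's documentation.

```python
# Convert a MATLAB video matrix to mp4, assuming a (height, width, n_frames)
# grayscale array; 'frames' is a hypothetical key, not the database's schema.
import numpy as np
from scipy.io import loadmat
import imageio

mat = loadmat("session_001.mat")             # hypothetical filename
frames = mat["frames"]                       # assumed key; check the .mat contents
writer = imageio.get_writer("session_001.mp4", fps=250)  # assumed frame rate
for i in range(frames.shape[-1]):
    writer.append_data(frames[..., i].astype(np.uint8))
writer.close()
```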

Read more about active sensing behavior, data collection methods, and rationale here.

The database is available through GigaScience DB. 

You can also try out their Matlab Whisker Tracker on GitHub!

Camera Control

February 6, 2020

The Adaptive Motor Control Lab at Harvard recently posted their project Camera Control, a Python-based camera software GUI, to GitHub.

Camera Control is an open-source software package written by postdoctoral fellow Gary Kane that allows video to be recorded in sync with behavior. The Python GUI and scripts let investigators record from multiple Imaging Source camera feeds with associated timestamps for each frame. When used in combination with a NIDAQ card, timestamps from a behavioral task can also be recorded on the falling edge of a TTL signal. This allows video analysis to be paired with physiological recordings, which can be beneficial when assessing behavioral results. The package requires Windows 10, Anaconda, and Git, and is compatible with Imaging Source USB3 cameras. The software is available for download from the lab's GitHub, where instructions for installation and video recording are provided.
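For illustration, the sketch below records frames alongside per-frame system timestamps using OpenCV. It is a conceptual stand-in for what Camera Control automates, not the package's own Imaging Source interface or its NIDAQ-triggered timestamping.

```python
# Record video plus a CSV of per-frame system timestamps. The camera index,
# frame rate, and duration are all assumptions for this sketch.
import csv
import time
import cv2

cap = cv2.VideoCapture(0)                     # camera index is an assumption
ok, frame = cap.read()                        # grab one frame to learn its size
h, w = frame.shape[:2]
out = cv2.VideoWriter("video.mp4", cv2.VideoWriter_fourcc(*"mp4v"), 30.0, (w, h))

with open("timestamps.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["frame", "system_time_s"])
    for frame_idx in range(900):              # ~30 s at 30 fps
        ok, frame = cap.read()
        if not ok:
            break
        out.write(frame)
        writer.writerow([frame_idx, time.time()])  # one timestamp per frame

cap.release()
out.release()
```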

Find more on GitHub.

Kane, G. & Mathis, M. (2019). Camera Control: record video and system timestamps from Imaging Source USB3 cameras. GitHub. https://zenodo.org/badge/latestdoi/200101590

Oat: online animal tracker

December 5, 2019

Jonathan Newman of the Wilson Lab at Massachusetts Institute of Technology has developed and shared a set of programs for processing video.

The only thing you'll enjoy more than an oat milk latte from your favorite coffeeshop is Oat, a collection of video processing components designed for use with Linux! Developed by Jonathan Newman of Open Ephys, this set of programs is useful for processing video, extracting object position information, and streaming data. While it was designed for real-time animal position tracking, it can be used in any application that requires real-time object tracking. Each Oat component has a standard interface, and components can be chained together to create complex dataflow networks for capturing, processing, and recording video streams.
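Oat itself is a set of C++ programs that communicate through shared memory, but the chaining idea can be sketched in Python as a pipeline of generators, purely as a conceptual analogy and not Oat's actual interface:

```python
# Conceptual analogy of an Oat component chain: each stage consumes a stream
# and yields a processed stream.
def capture(n_frames):
    for i in range(n_frames):
        yield {"frame": i, "pixels": ...}      # stand-in for a video frame

def detect_position(frames):
    for f in frames:
        f["position"] = (0.0, 0.0)             # stand-in for object detection
        yield f

def record(stream, path):
    with open(path, "w") as out:
        for f in stream:
            out.write(f"{f['frame']},{f['position']}\n")

# capture -> detect -> record, mirroring a chained dataflow network
record(detect_position(capture(100)), "positions.csv")
```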

Read more about Oat on the Open Ephys website, or check out more on GitHub!



July 18, 2019

In a 2015 Scientific Reports article, Andre Samson and colleagues shared their project MouseMove, open-source software for quantifying movement in the open field test:

The Open Field (OF) test is a commonly used assay for monitoring exploratory behavior and locomotion in rodents. Most research groups use commercial systems for recording and analyzing behavior in the OF test, but these systems can be expensive and lack flexibility. A few open-source OF systems have been developed, but they are limited in the movement parameters they can collect and analyze. MouseMove is the first open-source software capable of providing qualitative and quantitative information on mouse locomotion in a semi-automated, high-throughput fashion. With the aim of providing a freely available program for analyzing OF test data, the researchers developed software that accurately quantifies numerous movement parameters.

In their manuscript, Samson et al. describe the design and implementation of MouseMove. Their OF system measures distance, speed, and laterality with >96% accuracy. They used MouseMove to analyze the OF behavior of mice after experimental stroke, demonstrating reduced locomotor activity and quantifying laterality deficits. The system works in combination with the open-source program ImageJ and its MTrack2 plugin to analyze pre-recorded OF test video.

The system has two downloadable components: the ImageJ macro and a separate program with the custom-built MouseMove GUI. ImageJ subtracts the background from the experimental video and creates an image of the animal's total trajectory. The MouseMove GUI then performs a detailed analysis of the movement patterns, measuring the fraction of time spent stationary, the distance traveled, mean speed, and various measures of laterality. Results are presented both in visual/graphical form and as a saveable text file. The manuscript provides step-by-step instructions for using MouseMove. The authors also highlight the software's region-of-interest (ROI) capability, which makes it suitable for analyzing cognitive tests such as novel object recognition. The tool offers relatively fast video processing of motor and cognitive behaviors and has many applications in studying rodent models of brain injury/stimulation to measure altered locomotion.
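For a sense of the quantities involved, here is a minimal Python sketch of such movement metrics computed from a centroid trajectory; it is illustrative only, not MouseMove's implementation, which runs on ImageJ/MTrack2 output.

```python
# Compute distance, speed, stationarity, and a simple laterality index from
# an (n, 2) array of centroid coordinates (assumed in cm) at a known frame rate.
import numpy as np

def movement_metrics(xy, fps, stationary_thresh_cm_s=0.5):
    steps = np.diff(xy, axis=0)                    # per-frame displacement
    dist = np.linalg.norm(steps, axis=1)           # cm per frame
    speed = dist * fps                             # instantaneous speed, cm/s
    headings = np.arctan2(steps[:, 1], steps[:, 0])
    turns = np.diff(np.unwrap(headings))           # signed turn angles (rad)
    return {
        "total_distance_cm": dist.sum(),
        "mean_speed_cm_s": speed.mean(),
        "fraction_stationary": (speed < stationary_thresh_cm_s).mean(),
        # net bias toward left (+) vs right (-) turns as a laterality proxy
        "laterality": np.sign(turns).mean(),
    }
```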


More information on MouseMove can be found in their manuscript here.

Samson, A. L., Ju, L., Ah Kim, H., Zhang, S. R., Lee, J. A. A., Sturgeon, S. A., … Schoenwaelder, S. M. (2015). MouseMove: an open source program for semi-automated analysis of movement and cognitive testing in rodents. Scientific Reports, 5, 16171. doi: 10.1038/srep16171


June 20, 2019

Ahmet Arac from Peyman Golshani's lab at UCLA recently developed DeepBehavior, a deep-learning toolbox with post-processing methods for video analysis of behavior:

Recently, there has been a major push for more fine-grained and detailed behavioral analysis in neuroscience. While there are methods for capturing high-speed, high-quality video of behavior, the data still need to be processed and analyzed. DeepBehavior is a deep learning toolbox that automates this process; its main purpose is to analyze and track behavior in rodents and humans.

The authors provide three different convolutional neural network models (TensorBox, YOLOv3, and OpenPose), chosen for their ease of use; the user can decide which model to implement based on the experiment and the kind of data they aim to collect and analyze. The article provides methods and tips for training neural networks on this type of data and for post-processing of image data.

In the manuscript, the authors give examples of using DeepBehavior in five behavioral tasks in both animals and humans. For rodents, they use a food-pellet reaching task, a three-chamber test, and social interaction between two mice. In humans, they use a reaching task and a supination/pronation task. They provide 3D kinematic analysis in all tasks and show that a transfer learning approach accelerates network training when images from the behavior videos are used. A major benefit of this tool is that it can be modified and generalized across behaviors, tasks, and species. Additionally, DeepBehavior uses several different neural network architectures and, uniquely, provides post-processing methods for 3D kinematic analysis, which sets it apart from previously published toolboxes for video-based behavioral analysis. Finally, the authors emphasize the toolbox's potential for analyzing human motor function in clinical settings.
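As an example of the kind of 3D kinematic post-processing this enables, the sketch below computes a joint-angle trajectory from 3D keypoints; the array names and shapes are assumptions rather than DeepBehavior's API.

```python
# Joint angle over time from three 3D keypoint trajectories, e.g. for a
# reaching task; each input is a hypothetical (n_frames, 3) array.
import numpy as np

def joint_angle(a, b, c):
    """Angle at keypoint b (degrees) formed by 3D points a-b-c."""
    u = a - b
    v = c - b
    cos = np.sum(u * v, axis=1) / (
        np.linalg.norm(u, axis=1) * np.linalg.norm(v, axis=1)
    )
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# e.g., elbow angle across frames from shoulder/elbow/wrist trajectories:
# elbow_deg = joint_angle(shoulder_xyz, elbow_xyz, wrist_xyz)
```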


For more details, take a look at the project's GitHub.

All three models used in the paper also have their own Github: TensorBox, YOLOv3, and openpose.

Arac, A., Zhao, P., Dobkin, B. H., Carmichael, S. T., & Golshani, P. (2019). DeepBehavior: A deep learning toolbox for automated analysis of animal and human behavior imaging data. Frontiers in Systems Neuroscience, 13.



June 13, 2019

Zach Pennington from Denise Cai's lab at Mt. Sinai recently published a paper in Scientific Reports describing their latest open-source project, ezTrack:

ezTrack is an open-source, platform-independent set of behavior analysis pipelines built on interactive Python (iPython/Jupyter Notebook) that researchers with no prior programming experience can use. Behavioral tracking analysis shouldn't be limited to those with extensive programming knowledge, and ezTrack is a welcome alternative to currently available software that may require more programming experience. The manuscript and Jupyter notebooks are written in the style of a tutorial and are meant to give the user straightforward instructions for implementing ezTrack. ezTrack differs from other recent video analysis toolboxes in that it does not use deep learning algorithms and thus does not require training sets for transfer learning.

ezTrack can be used to analyze rodent behavior videos of a single animal in a range of settings, and the authors provide examples of positional analysis across several tasks (place preference, water maze, open field, elevated plus maze, light-dark box, etc.), as well as analysis of freezing behavior. ezTrack provides frame-by-frame data output in .csv files, and users can crop the video frames to remove issues with cables from optogenetic or electrophysiological experiments. ezTrack accepts multiple video formats, such as mpg1, wav, avi, and more.
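The core tracking idea (comparing each frame against a reference image rather than running a neural network) can be sketched in a few lines. The sketch below is a simplified illustration, not ezTrack's notebook code, and assumes a grayscale-convertible video with a largely static background.

```python
# Background-subtraction centroid tracking: estimate an empty-arena reference,
# then locate the animal as the centroid of the most-changed pixels per frame.
import cv2
import numpy as np

cap = cv2.VideoCapture("openfield.avi")      # hypothetical filename
frames = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32))
cap.release()

reference = np.median(frames, axis=0)        # per-pixel background estimate
centroids = []
for f in frames:
    diff = np.abs(f - reference)             # the animal stands out from background
    ys, xs = np.where(diff > np.percentile(diff, 99.5))
    centroids.append((xs.mean(), ys.mean())) # approximate animal position
```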

Aside from being open-source, ezTrack has several major advantages. Notably, it is user-friendly and accessible to researchers with little to no programming background. The user needs to adjust few parameters, and the data can be processed into interactive visualizations and easily exported as .csv files. ezTrack is both operating-system and hardware independent and can be used across multiple platforms. Using iPython/Jupyter Notebook also lets researchers easily replicate their analyses.

Check out their GitHub with more details on how to use ezTrack: https://github.com/denisecailab/ezTrack

Pennington, Z. T., Dong, Z., Bowler, R., Feng, Y., Vetere, L. M., Shuman, T., & Cai, D. J. (2019). ezTrack: An open-source video analysis pipeline for the investigation of animal behavior. Scientific Reports, 9.


February 6, 2019

Arne Meyer and colleagues recently shared their design and implementation of a head-mounted camera system for capturing detailed behavior in freely moving mice.

Video monitoring of animals can give great insight into behavior. However, most video monitoring systems that collect precise behavioral data require fixed-position cameras and stationary animals, which can limit observation of natural behaviors. To address this, Meyer et al. developed a system that combines a lightweight head-mounted camera with head-movement sensors to detect behaviors in mice. The system, built from commercially available and 3D-printed parts, can be used to monitor a variety of subtle behaviors, including eye position, whisking, and ear movements, in unrestrained animals. Furthermore, the device can be mounted in combination with neural implants for recording brain activity.
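A small sketch of what combining the two data streams looks like in practice: interpolating head-movement sensor samples onto camera frame times so that each video frame has a matching orientation sample. The rates and names below are assumptions, not the authors' file format.

```python
# Align a head-movement sensor stream with camera frames by interpolation.
import numpy as np

imu_t = np.arange(0, 60, 1 / 200)            # 200 Hz sensor clock (assumed)
imu_pitch = np.sin(imu_t)                    # stand-in for a head-pitch signal
frame_t = np.arange(0, 60, 1 / 30)           # 30 fps camera clock (assumed)

# one head-pitch value per video frame, ready to pair with eye/whisker measures
pitch_per_frame = np.interp(frame_t, imu_t, imu_pitch)
```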

Read more here! You can also check out their GitHub here. Documentation and files are also available on Open Ephys here.