Last fall, when teaching an undergraduate course on computational methods in neuroscience at American University, we wanted to bring in some of the tools for video analysis that have been promoted on OpenBehavior. The idea was to introduce these tools at the end of the course, after the students had learned a bit about Python, Anaconda, Jupyter, Arduinos, etc. We decided on ezTrack from the Cai Lab, as it is written in Python and uses Jupyter notebooks. Preparing for this topic was easy until we realized that we needed simple videos for tracking. Those from our lab come from operant chambers illuminated with infrared LEDs and require a good bit of preprocessing before they are suited to simple tracking algorithms. In addition, we use Long-Evans rats in our studies, and their coloring makes them a challenge to track. So we looked around the web for example videos and were surprised at how few have been shared by the labs that have developed, published with, and promoted tools for video analysis. Most videos that we found showed the results of tracking but did not provide the raw video data. We did find a nice example of open-field behavior in mice (Samson et al., 2015) and used the supplemental videos from this now five-year-old paper for the course.
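To make "a good bit of preprocessing" concrete, here is a minimal sketch of the kind of pipeline such videos need before a simple tracker can find the animal, assuming OpenCV and NumPy are installed. The file name, frame counts, and threshold values are illustrative placeholders, not ezTrack's actual internals:

```python
# Minimal preprocessing sketch: median background model + subtraction.
import cv2
import numpy as np

cap = cv2.VideoCapture("operant_chamber_ir.mp4")  # hypothetical input file

# Estimate a static background as the pixel-wise median of sample frames.
frames = []
while len(frames) < 50:
    ok, frame = cap.read()
    if not ok:
        break
    frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
background = np.median(np.stack(frames), axis=0).astype(np.uint8)

# Rewind, then segment the animal in each frame by background subtraction.
cap.set(cv2.CAP_PROP_POS_FRAMES, 0)
centroids = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, background)
    _, mask = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)
    mask = cv2.medianBlur(mask, 5)  # suppress speckle noise from IR lighting
    moments = cv2.moments(mask)
    if moments["m00"] > 0:  # centroid of the segmented pixels
        centroids.append((moments["m10"] / moments["m00"],
                          moments["m01"] / moments["m00"]))
cap.release()
```

Even this toy version hints at why dark rats on dark backgrounds are hard: if the background subtraction or threshold fails, everything downstream fails with it.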

These experiences made us wonder if having a collection of videos for teaching and training would be useful to the community. A collection of video recordings of animals engaged in standard neuroscience behavioral tasks (e.g., feeding, foraging, fear conditioning, operant learning) would be useful for educational purposes: students could read published papers to understand the experimental design and then analyze data from those studies using modifications of the tutorial code available for packages such as ezTrack. For researchers, the same videos would be useful for reproducing analyses from published studies and for quickly learning how to apply published code to their own data. Furthermore, with the development of tools that use advanced statistical methods for video analysis (e.g., DeepLabCut, B-SOiD), a repository that could be used to benchmark algorithms and explore their parameter space seems warranted. One could even envision an analysis competition built on standard benchmark videos, similar to the competitions in machine learning that have driven the development of algorithms (e.g., xgboost) far more powerful than those available only a decade ago.
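As one illustrative (and deliberately simple) example of what benchmarking against such a repository could look like, a tracker's output could be scored against hand-labeled ground-truth positions. This sketch assumes both trajectories are NumPy arrays of shape (n_frames, 2); the pixel tolerance is an arbitrary placeholder:

```python
import numpy as np

def tracking_error(predicted, ground_truth, tolerance_px=10):
    """Score a tracker against ground truth, both (n_frames, 2) arrays.

    Returns the mean per-frame Euclidean error in pixels and the
    fraction of frames within `tolerance_px` of the labeled position.
    """
    errors = np.linalg.norm(predicted - ground_truth, axis=1)
    return errors.mean(), (errors <= tolerance_px).mean()
```

Real benchmarks would need richer metrics (multiple body parts, identity swaps, occlusions), but even a shared set of videos with agreed-upon ground truth would let labs compare tools on common footing.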

So we are posting today to ask for community participation in the creation of a video repository. The plan is to post license-free videos to the OpenBehavior Google Drive account. Our OpenBehavior team will convert the files to a standard format (mp4) and post links to the videos on the OpenBehavior website, so they will be accessible to the community. For each video, the website will list the creator of the file; the camera and software used for the recording; the resolution, frame rate, and duration of the recording; the species; and information on the behavioral experiment (with a link to the publication or preprint if the work is from a manuscript).
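For those curious about what the conversion and curation step might involve, here is a sketch of how a video could be standardized with ffmpeg (called from Python; ffmpeg must be installed) and paired with a metadata record mirroring the fields listed above. All file names and field values are placeholder examples, not a finalized submission format:

```python
import json
import subprocess

# Convert a submitted video to a standard H.264 mp4.
subprocess.run(
    ["ffmpeg", "-i", "submission.avi", "-c:v", "libx264",
     "-pix_fmt", "yuv420p", "open_field_mouse_01.mp4"],
    check=True,
)

# Metadata record matching the fields described above (example values).
metadata = {
    "creator": "Example Lab, Example University",
    "camera": "example overhead USB camera",
    "software": "example acquisition software",
    "resolution": "1280x1024",
    "frame_rate_hz": 30,
    "duration_s": 600,
    "species": "Mus musculus",
    "experiment": "open field, 10 min session",
    "publication": "link to paper or preprint, if applicable",
}
with open("open_field_mouse_01.json", "w") as f:
    json.dump(metadata, f, indent=2)
```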

For rodent studies, we are especially interested in overhead views of open-field and operant-arena experiments, and in close-up videos of facial reactions, eyeblinks, oral movements, and limb reaching. We are happy to curate videos from other species (fish, birds, monkeys, people) as well.

If you are interested in participating, please complete the form on this page or reach out to us by email at openbehavior@gmail.com or on Twitter at @OpenBehavior.