Tag: video tracking

Oat: online animal tracker

December 5, 2019

Jonathan Newman of the Wilson Lab at Massachusetts Institute of Technology has developed and shared a set of programs for processing video.


The only thing you’ll enjoy more than an oat milk latte from your favorite coffeeshop is Oat, a collection of video processing components designed for use with Linux! Developed by Jonathan Newman of Open Ephys, this set of programs is useful for processing video, extracting object position information, and streaming data. While it was designed for real-time animal position tracking, it can be used in many applications that require real-time object tracking. Each Oat component exposes a standard interface, so components can be chained together to create complex dataflow networks for capturing, processing, and recording video streams.
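Oat itself is driven from the command line, but the chaining idea is easy to picture in code. Below is a minimal Python sketch of the same dataflow pattern (independent stages that consume one stream and produce another), written purely for illustration; it is not Oat's actual API, and the video path is a placeholder.

```python
import cv2  # used here only to illustrate frame-processing stages

# A hypothetical, Oat-like dataflow: each stage consumes a stream of frames
# and produces a new stream, so stages can be chained into arbitrary networks.

def frame_source(path):
    """Capture stage: yields frames from a video file or camera index."""
    cap = cv2.VideoCapture(path)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        yield frame
    cap.release()

def grayscale_filter(frames):
    """Processing stage: converts each frame to grayscale."""
    for frame in frames:
        yield cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

def position_detector(frames):
    """Detection stage: reports the brightest pixel as a crude 'position'."""
    for frame in frames:
        _, _, _, max_loc = cv2.minMaxLoc(frame)
        yield max_loc

if __name__ == "__main__":
    # Chain the stages, much like piping Oat components together.
    positions = position_detector(grayscale_filter(frame_source("video.avi")))
    for x, y in positions:
        print(f"position: ({x}, {y})")
```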

Read more about Oat on the Open Ephys website, or check out the code on GitHub!


https://open-ephys.org/oat

DeepBehavior

June 20, 2019

Ahmet Arac from Peyman Golshani’s lab at UCLA recently developed DeepBehavior, a deep-learning toolbox with post-processing methods for video analysis of behavior:


Recently, there has been a major push for more fine-grained and detailed behavioral analysis in neuroscience. While there are methods for capturing high-speed, high-quality video of behavior, the data still need to be processed and analyzed. DeepBehavior is a deep learning toolbox that automates this process; its main purpose is to analyze and track behavior in rodents and humans.

The authors provide three different convolutional neural network models (TensorBox, YOLOv3, and OpenPose), chosen for their ease of use, and the user can decide which model to implement based on the experiment and the kind of data they aim to collect and analyze. The article provides methods and tips for training neural networks on this type of data, and describes methods for post-processing of image data.

In the manuscript, the authors give examples of applying DeepBehavior to five behavioral tasks in both animals and humans. For rodents, they use a food pellet reaching task, a three-chamber test, and social interaction between two mice. In humans, they use a reaching task and a supination/pronation task. They provide 3D kinematic analysis in all tasks and show that the transfer learning approach accelerates network training when images from the behavior videos are used. A major benefit of this tool is that it can be modified and generalized across behaviors, tasks, and species. Additionally, DeepBehavior uses several different neural network architectures and, uniquely, provides post-processing methods for 3D kinematic analysis, which sets it apart from previously published toolboxes for video behavioral analysis. Finally, the authors emphasize the toolbox's potential for analyzing human motor function in a clinical setting.
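The 3D kinematic analysis rests on a standard idea: combine 2D detections of the same body part from two calibrated cameras into a 3D coordinate by triangulation. The sketch below shows that step with OpenCV; the projection matrices and pixel coordinates are made-up placeholders, and this is a generic illustration rather than DeepBehavior's own post-processing code.

```python
import numpy as np
import cv2

# Hypothetical 3x4 projection matrices from a prior stereo calibration.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])               # camera 1 at the origin
P2 = np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])  # camera 2 shifted 10 cm

# 2D detections of the same body part in each view (pixel coordinates),
# e.g. a paw position returned by a network such as YOLOv3 or OpenPose.
pts_cam1 = np.array([[320.0], [240.0]])
pts_cam2 = np.array([[305.0], [241.0]])

# Triangulate to homogeneous 3D coordinates, then normalize.
point_4d = cv2.triangulatePoints(P1, P2, pts_cam1, pts_cam2)
point_3d = (point_4d[:3] / point_4d[3]).ravel()
print("3D position:", point_3d)
```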


For more details, take a look at their project’s GitHub.

All three models used in the paper also have their own GitHub repositories: TensorBox, YOLOv3, and OpenPose.


Arac, A., Zhao, P., Dobkin, B. H., Carmichael, S. T., & Golshani, P. (2019). DeepBehavior: A deep learning toolbox for automated analysis of animal and human behavior imaging data. Frontiers in Systems Neuroscience, 13.


ezTrack

June 13, 2019

Zach Pennington from Denise Cai’s lab at Mt. Sinai recently posted a preprint describing their latest open-source project called ezTrack:


ezTrack is an open-source, platform-independent set of behavior analysis pipelines built on interactive Python (iPython/Jupyter Notebook) that researchers with no prior programming experience can use. It is a sigh of relief for researchers with little to no programming background: behavioral tracking analysis shouldn’t be limited to those with extensive programming knowledge, and ezTrack is a nice alternative to currently available software that may require more programming experience. The manuscript and Jupyter notebooks are written in the style of a tutorial and are meant to provide straightforward instructions for implementing ezTrack. ezTrack differs from other recent video analysis toolboxes in that it does not use deep learning algorithms and thus does not require training sets or transfer learning.

ezTrack can be used to analyze rodent behavior videos of a single animal in different settings, and the authors provide examples of positional analysis across several tasks (place preference, water maze, open field, elevated plus maze, light-dark box, etc.), as well as analysis of freezing behavior. ezTrack provides frame-by-frame data output in .csv files, and users can crop video frames to remove interference from cables used in optogenetic or electrophysiology experiments. ezTrack accepts multiple video formats, such as .mpg1, .wmv, .avi, and more.
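Because ezTrack avoids deep learning, location tracking of this kind boils down to comparing each frame against a reference image and taking the centroid of the pixels that changed. The snippet below is a minimal sketch of that idea using OpenCV and pandas, including the frame-by-frame .csv output mentioned above; it is written from scratch for illustration (the file name and threshold are placeholders) and is not code from the ezTrack notebooks.

```python
import cv2
import numpy as np
import pandas as pd

VIDEO = "open_field.avi"   # hypothetical input video
cap = cv2.VideoCapture(VIDEO)

# Build a reference image as the median of a sample of frames.
sample = []
for i in range(0, 200, 10):
    cap.set(cv2.CAP_PROP_POS_FRAMES, i)
    ok, frame = cap.read()
    if ok:
        sample.append(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
reference = np.median(sample, axis=0).astype(np.uint8)

# Track: centroid of pixels that differ strongly from the reference.
cap.set(cv2.CAP_PROP_POS_FRAMES, 0)
rows = []
frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, reference)
    _, mask = cv2.threshold(diff, 40, 255, cv2.THRESH_BINARY)
    ys, xs = np.nonzero(mask)
    if len(xs):
        rows.append({"frame": frame_idx, "x": xs.mean(), "y": ys.mean()})
    frame_idx += 1

pd.DataFrame(rows).to_csv("position.csv", index=False)  # frame-by-frame output
```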

Aside from being open-source, ezTrack has several major advantages. Notably, the tool is user-friendly in that it is accessible to researchers with little to no programming background. The user does not need to adjust many parameters, and the data can be processed into interactive visualizations and easily exported as .csv files. ezTrack is both operating system and hardware independent and can be used across multiple platforms. Using iPython/Jupyter Notebook also allows researchers to easily replicate their analyses.

Check out their GitHub with more details on how to use ezTrack: https://github.com/denisecailab/ezTrack


Pennington, Z. T., Dong, Z., Bowler, R., Feng, Y., Vetere, L. M., Shuman, T., & Cai, D. J. (2019). ezTrack: An open-source video analysis pipeline for the investigation of animal behavior. bioRxiv, 592592.

Low Cost Open Source Eye Tracking

May 30, 2019

On Hackaday, John Evans and colleagues have shared a design and build for an open-source eye-tracking system for human research.


We’ve wanted to expand our coverage of behavioral tools to include those used in human research. To get this rolling, we’d like to highlight a project for eye tracking that might be helpful to many labs, especially if you don’t have a grant to collect pilot data. Check out Low Cost Open Source Eye Tracking. It uses open-source code, available from GitHub, and a pair of cheap USB cameras.

Check out the details on Hackaday.io and GitHub!


Evans, J. (2018). Low Cost Open Source Eye Tracking. Retrieved from https://hackaday.io/project/153293-low-cost-open-source-eye-tracking

Automated classification of self-grooming in mice

May 16, 2019

In the Journal of Neuroscience Methods, Bastijn van den Boom and colleagues have shared their ‘how-to’ instructions for implementing behavioral classification with JAABA, featuring Bonsai and motr!


In honor of our 100th post on OpenBehavior, we wanted to feature a project that exemplifies how multiple open-source projects can be combined to address a common theme in behavioral neuroscience: tracking and classifying complex behaviors! The protocol from van den Boom et al. implements JAABA, an open-source machine-learning-based behavior detection system; motr, open-source mouse trajectory tracking software; and Bonsai, an open-source system capable of streaming and recording video. Together, these tools are used to process videos of mice performing grooming behaviors in a variety of behavioral setups.

They then compare multiple tools for analyzing grooming behavior sequences in both wild-type and genetic knockout mice with a tendency to over-groom. The JAABA-trained classifier outperforms commercially available behavior analysis software and more closely aligns with manual scoring by expert observers. This offers a novel, cost-effective, and easy-to-use method for assessing grooming behavior in mice that performs comparably to an expert observer, with the added advantage of being automatic. Instructions for training your own JAABA classifier can be found in their paper!
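Comparing a classifier against expert observers comes down to scoring frame-by-frame agreement between the automated labels and the manual annotations. Below is a minimal sketch of that evaluation using scikit-learn; the file names and label format are hypothetical, and this is not code from the paper.

```python
import numpy as np
from sklearn.metrics import f1_score, cohen_kappa_score

# Hypothetical per-frame labels: 1 = grooming, 0 = not grooming.
manual = np.loadtxt("manual_labels.csv", delimiter=",", dtype=int)
jaaba = np.loadtxt("jaaba_labels.csv", delimiter=",", dtype=int)

# Agreement between the trained classifier and the expert observer.
agreement = (manual == jaaba).mean()
print(f"frame-by-frame agreement: {agreement:.3f}")
print(f"F1 (grooming frames):     {f1_score(manual, jaaba):.3f}")
print(f"Cohen's kappa:            {cohen_kappa_score(manual, jaaba):.3f}")
```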

Read more in their publication here!


idtracker.ai

February 20, 2019

Francisco Romero Ferrero and colleagues have developed idtracker.ai, an algorithm and software for tracking individuals in large collectives of unmarked animals, recently described in Nature Methods.


Tracking individual animals in large collective groups can give interesting insights into behavior, but has proven to be a challenge for analysis. With advances in artificial intelligence and tracking software, it has become increasingly easy to collect such information from video data. Romero Ferrero et al. have developed an algorithm and tracking software built around two deep networks: the first handles animal identification, and the second detects when animals touch or cross paths in front of one another. The software has been validated to track individuals with high accuracy in groups of up to 100 animals across diverse species, from rodents to zebrafish to ants. The software is free, fully documented, and available online, with additional Jupyter notebooks for data analysis.
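Once the software has assigned identities, downstream analysis works on the saved trajectories. As a hedged sketch, assuming the trajectories load as a NumPy array of shape (frames, animals, 2) with NaNs where identity is lost (the file name and dictionary key below are assumptions and may differ between idtracker.ai versions), one might compute per-animal speed and inter-individual distance like this:

```python
import numpy as np

# Assumed layout: trajectories[frame, animal, (x, y)], NaN where identity is lost.
data = np.load("trajectories.npy", allow_pickle=True).item()
traj = np.asarray(data["trajectories"], dtype=float)    # (n_frames, n_animals, 2)

# Per-animal speed (pixels per frame).
speed = np.linalg.norm(np.diff(traj, axis=0), axis=2)   # (n_frames - 1, n_animals)
print("mean speed per animal:", np.nanmean(speed, axis=0))

# Mean pairwise distance between individuals in each frame.
diffs = traj[:, :, None, :] - traj[:, None, :, :]       # (frames, n, n, 2)
dists = np.linalg.norm(diffs, axis=-1)                  # (frames, n, n)
n = traj.shape[1]
off_diag = ~np.eye(n, dtype=bool)
print("mean inter-individual distance:", np.nanmean(dists[:, off_diag]))
```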

Check out their website with full documentation, the recent Nature Methods article, the bioRxiv preprint, and a great video of idtracker.ai tracking 100 zebrafish!


Open-source platform for worm behavior

February 13, 2019

In Nature Methods, Avelino Javer and colleagues developed and shared an open-source platform for analyzing and sharing worm behavioral data.


Collecting behavioral data is important, and analyzing those data is just as crucial. Sharing the data matters as well, because it can further our understanding of behavior and increase the replicability of worm behavioral studies by allowing many scientists to re-analyze available data and to develop new methods of analysis. Javer and colleagues developed an open resource in an effort to streamline the steps involved in this process, from storing and accessing video files to creating software to read and analyze the data. The platform features an open-access repository for storing, accessing, and filtering data; an interchange format for annotating single- or multi-worm behavior; and Python-based tools for feature extraction, review, and analysis. Together, these tools serve as an accessible suite for quantitative behavior analysis that can be used by experimentalists and computational scientists alike.
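The interchange format is what makes the shared data re-analyzable by anyone. As a minimal sketch, assuming a WCON-style JSON layout (time stamps plus x/y coordinates per worm; the field names follow the public WCON specification but should be checked against the platform's documentation, and the file name is a placeholder), a track can be read and summarized like this:

```python
import json
import numpy as np

# Minimal reader for a WCON-style interchange file. Each worm is assumed to
# carry a single x/y centroid track over time; real files may store full skeletons.
with open("worm_tracks.wcon") as f:
    wcon = json.load(f)

print("units:", wcon.get("units", {}))
for record in wcon["data"]:
    t = np.asarray(record["t"], dtype=float)
    x = np.asarray(record["x"], dtype=float)
    y = np.asarray(record["y"], dtype=float)
    path_length = np.sum(np.hypot(np.diff(x), np.diff(y)))
    print(f"worm {record['id']}: {len(t)} samples, path length {path_length:.2f}")
```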


Read more about this platform in Nature Methods! (The preprint is also available on bioRxiv!)


KineMouse Wheel

October 10, 2018

On Hackaday, Richard Warren of the Sawtell Lab at Columbia University has shared his design for the KineMouse Wheel, a lightweight running wheel for head-fixed locomotion that allows for 3D positioning of mice with a single camera.


Locomotor behavior is a common behavioral readout in neuroscience research, and running wheels are a great tool for assessing motor function in head-fixed mice. The KineMouse Wheel takes this tool a step further. Constructed of lightweight, transparent polycarbonate with an angled mirror mounted inside, this innovative device allows a single camera to capture two views of locomotion simultaneously. When combined with DeepLabCut, deep-learning-based tracking software, the locomotion of head-fixed mice can be captured in three dimensions, allowing for a more complete assessment of motor behavior. The wheel can also be customized to fit the needs of a lab by using different materials for the build. More details about the KineMouse Wheel, along with a full list of parts and build instructions, are available at hackaday.io.
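The mirror trick means each camera image contains two roughly orthogonal views of the same body part, so a 3D trajectory can be assembled by pairing the two 2D detections. Below is a minimal sketch assuming DeepLabCut's standard HDF5 output and hypothetical body-part names ('paw_side' for the direct view, 'paw_bottom' for the mirrored view); which image axis maps to depth depends on the mirror geometry, and no calibration is applied here.

```python
import pandas as pd
import numpy as np

# DeepLabCut writes per-video tracking results as an HDF5 pandas DataFrame with
# (scorer, bodypart, coord) columns. File and body-part names are placeholders.
df = pd.read_hdf("wheel_video_DLC.h5")
scorer = df.columns.get_level_values(0)[0]

side = df[scorer]["paw_side"]       # direct view: horizontal (x) and vertical (y)
bottom = df[scorer]["paw_bottom"]   # mirrored view: assumed to supply depth (z)

# Combine the two views into a 3D trajectory (pixel units, no calibration).
paw_3d = np.column_stack([side["x"], side["y"], bottom["y"]])

# Drop frames where either detection was low-confidence.
good = (side["likelihood"] > 0.9) & (bottom["likelihood"] > 0.9)
paw_3d = paw_3d[good.to_numpy()]
print("3D points recovered:", paw_3d.shape)
```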

Read more about KineMouse Wheel on Hackaday, and check out other awesome open-source tools on the OpenBehavior Hackaday list!



Q&A with Dr. Mackenzie Mathis on her experience with developing DeepLabCut

August 22, 2018

Dr. Mackenzie Mathis, Principal Investigator of the Adaptive Motor Control Lab (Rowland Institute at Harvard University), has shared the following responses to a short Q&A about the inspiration behind, development of, and sharing of DeepLabCut, a toolbox for animal tracking using deep learning.


What inspired you and your colleagues to create this toolbox as opposed to using previously developed commercial software?

Alexander Mathis and I both worked on behaviors where we wanted to track particular features, and they proved to be unreliably tracked with the methods we tried. Specifically, Alexander has an odor-guided navigation task that he works on in the lab of Prof. Venkatesh Murthy at Harvard, where the mice are placed in a very large “endless” paper trail and he inkjet prints odors for them to follow to get rewards (chocolate milk). The position of the snout is very important to measure accurately, so background subtraction or other heuristics didn’t work when the nose crossed the trail and when the droplet was right in front of the snout. I worked on a skilled joystick behavior for mice, and I wanted to track joints accurately and non-invasively – a challenging problem for little hands. So, we teamed up with Prof. Matthias Bethge at the University of Tuebingen to work on a new approach. He suggested we start looking into the rapidly advancing human pose estimation literature, and we looked at several before deciding to seriously benchmark DeeperCut, a top-performing algorithm on the large MPII dataset. Those authors did something very clever, namely, they used a deep neural network (ResNet) that was pre-trained on a large image set called ImageNet. This gives the ResNet a chance to learn natural scene statistics first. Remarkably, we found that we could use only a few frames to very accurately track the snout in the odor-guided navigation task, so we next tried videos from my joystick task, and to flex DeepLabCut’s muscles, we teamed up with Kevin Cury (who, like myself, was an alumnus of Prof. Nao Uchida’s group) to track fruit flies in the 3D chamber. After all this benchmarking, we built a toolbox that implements a complete pipeline to extract and label frames, train and evaluate the deep neural nets, as well as analyze new experimental videos. We call this toolbox DeepLabCut, as a nod to DeeperCut.

What was the motivation for immediately sharing your work as an open source tool, thus making it accessible to the broader neuroscience community?

Some of the options we first tried to track with were very expensive commercial systems, and they failed quite badly. On the other hand, deep learning has revolutionized computer vision in the last few years, so we were eager to try some new approaches to solve the problem. So, in addition to being advocates of open science, we really wanted to make a toolbox that someone with minimal to no coding experience could, absolutely for free, track whatever they wanted.

We also know peer review can be slow, so as soon as we had the toolbox in place, we wrote up the arXiv paper and released the code base immediately. Honestly, it has been one of my most rewarding papers – the feedback from our peers, and seeing what people have used the code for, has been a very rewarding experience. This was my first preprint, and especially for methods manuscripts, I now cannot imagine another way to share our future work.

How do you think open source tools, such as yours, will continue to impact the progress of scientific research?

Open source code and preprints have been the norm in some fields for decades (such as math and physics), and I am really excited to see it come of age in biology and neuroscience. I am excited to see how tools will continue to improve as the community gets behind them, just as we could build on DeeperCut, which was open source. Also, at least in my experience, many individuals write their own code, which leads to a lot of duplicated effort. Moreover, datasets are becoming increasingly complicated, and the code to work with such data needs to be robust and shared. My expectation is that open source code will become the norm in the future, which can only help science become more robust.

Even before formal publication this week (see Nature Neuroscience), we estimate that about 100 labs are actively using DeepLabCut, so releasing the code before publication, we hope, has really allowed for rapid progress to be made. We were also very happy that The Atlantic could highlight some of the early adopters, as it’s one thing to say you made something, but it’s another to hear others saying it is actually ‘something.’


DeepLabCut provides an efficient method for markerless pose estimation based on transfer learning with deep neural networks that achieves excellent results with minimal training data. Read more on the website, or in Nature Neuroscience.
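The complete pipeline described above (extract and label frames, train and evaluate the deep neural nets, analyze new videos) maps onto a short sequence of calls in the DeepLabCut Python package. The sketch below follows the documented top-level functions; exact arguments vary by version, and the project name, experimenter, and video paths are placeholders.

```python
import deeplabcut

# Create a project and pull frames to label (paths are placeholders).
config = deeplabcut.create_new_project(
    "joystick-task", "experimenter", ["videos/session1.avi"], copy_videos=True
)
deeplabcut.extract_frames(config)
deeplabcut.label_frames(config)          # opens the labeling GUI

# Train and evaluate the (ImageNet-pretrained) network on the labeled frames.
deeplabcut.create_training_dataset(config)
deeplabcut.train_network(config)
deeplabcut.evaluate_network(config)

# Apply the trained network to new experimental videos.
deeplabcut.analyze_videos(config, ["videos/session2.avi"])
deeplabcut.create_labeled_video(config, ["videos/session2.avi"])
```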



Open source modules for tracking animal behavior and closed-loop stimulation based on Open Ephys and Bonsai

June 15, 2018

In a recent preprint on bioRxiv, Alessio Buccino and colleagues from the University of Oslo provide a step-by-step guide for setting up an open-source, low-cost, and adaptable system for combined behavioral tracking, electrophysiology, and closed-loop stimulation. Their setup integrates Bonsai and Open Ephys with multiple modules they have developed for robust real-time tracking and behavior-based closed-loop stimulation. In the preprint, they describe using the system to record place cell activity in the hippocampus and medial entorhinal cortex, and present a case in which they used it for closed-loop optogenetic stimulation of grid cells in the entorhinal cortex, as examples of what the system is capable of. Expanding the Open Ephys system to include animal tracking and behavior-based closed-loop stimulation extends the availability of high-quality, low-cost experimental setups that use standardized data formats.
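The closed-loop part of such a workflow is conceptually simple: compare the tracked position against a target region on every frame and trigger stimulation when the animal enters it. The sketch below illustrates that logic in Python with a placeholder trigger function; in the paper the real-time loop runs inside Bonsai and Open Ephys rather than in a script like this, and the positions and region are invented values.

```python
import numpy as np

def in_target_region(pos, center, radius):
    """True when the tracked position falls inside a circular target region."""
    return np.hypot(pos[0] - center[0], pos[1] - center[1]) < radius

def trigger_stimulation():
    # Placeholder: in the real setup this would raise a TTL line or send an
    # event to the stimulation hardware via Open Ephys or Bonsai.
    print("stimulation ON")

# Hypothetical stream of tracked (x, y) positions from the tracking module.
positions = [(10.0, 12.0), (14.5, 15.1), (20.2, 19.8), (25.0, 24.7)]
target_center, target_radius = (20.0, 20.0), 3.0

for pos in positions:
    if in_target_region(pos, target_center, target_radius):
        trigger_stimulation()
```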

Read more on bioRxiv, or on GitHub!


Buccino, A., Lepperød, M., Dragly, S., Häfliger, P., Fyhn, M., & Hafting, T. (2018). Open source modules for tracking animal behavior and closed-loop stimulation based on Open Ephys and Bonsai. bioRxiv. http://dx.doi.org/10.1101/340141