BPM Biosignals

January 28, 2017

Dr. Bernd Porr has shared his open-source webpage, BPM Biosignals, with Open Behavior in hopes of providing an educational resource regarding the construction and implementation of bioamplifiers as well as interpretation of biosignals.

“BPM Biosignals provides a no-nonsense approach to assembling a two-stage bio-amplifier and then using it to measure ECG, EEG, EMG and other biosignals. The page features the list of components, the circuit diagram, and video tutorials on how to assemble the amplifier and how to use it properly. The main aim of this page is biosignal education, in particular how to distinguish noise from actual biosignals. One will learn that noise is often “sold” as a biosignal, even in top-level publications. With the help of these clips one can critically evaluate such results and make up one’s own mind.”


Openspritzer

January 13, 2017

Tom Baden, from the University of Sussex, has generously shared the following device with Open Behavior:

Designed for ease of use, robustness and low cost, the “Openspritzer” is an open-hardware “Picospritzer” as routinely used in labs around the world for administering picoliters of liquid to biological samples. The performance of Openspritzer and commercial alternatives is effectively indistinguishable.
The system is based on a solenoid valve connected to a pressure gauge. Control can be achieved directly via an external TTL pulse or internally through an Arduino set by a rotary encoder. The basic setup can be put together for €300–400, or substantially less if you are prepared to shop around.

More information regarding Openspritzer can be found on Open Labware.

Supplementary Materials

3D Files and Arduino Code 


Janelia Automatic Animal Behavior Annotator (JAABA)

December 30, 2016

Mayank Kabra has shared the following about JAABA, a machine learning-based behavior detection system developed by the Branson Lab at HHMI Janelia Farm.

The Janelia Automatic Animal Behavior Annotator (JAABA) is a machine learning-based system that enables researchers to automatically detect behaviors in video of behaving animals. Users begin by labeling the behavior of the animal, e.g. walking, grooming, or following, in a few of the video frames. Using machine learning, JAABA learns behavior detectors from these labels that can then be used to automatically classify the behaviors of animals in other videos. JAABA has a fast and intuitive graphical user interface and offers multiple ways for a non-machine-learning user to visualize and understand the resulting classifier. JAABA has enabled the extraction of detailed, scientifically meaningful measurements of behavioral effects in large experiments.
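The label-then-classify loop can be sketched in a few lines. JAABA's real detectors are boosted classifiers over trajectory features computed from tracking; the nearest-centroid rule, feature names, and values below are invented stand-ins, a minimal sketch of the workflow rather than JAABA's algorithm.

```python
# Toy sketch of the label-then-classify workflow JAABA automates.
# JAABA itself uses boosted classifiers over trajectory features; this
# nearest-centroid stand-in only illustrates the idea. Feature values
# and labels below are made up for illustration.

def train(labeled_frames):
    """labeled_frames: list of (feature_vector, behavior_label)."""
    sums, counts = {}, {}
    for features, label in labeled_frames:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, v in enumerate(features):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    # One centroid (mean feature vector) per labeled behavior
    return {label: [v / counts[label] for v in acc]
            for label, acc in sums.items()}

def classify(centroids, features):
    """Assign the label whose centroid is nearest (squared Euclidean)."""
    def dist(label):
        return sum((a - b) ** 2 for a, b in zip(features, centroids[label]))
    return min(centroids, key=dist)

# A few hand-labeled frames: (speed, wing_angle) -> behavior
labels = [((5.0, 0.1), "walking"), ((4.5, 0.2), "walking"),
          ((0.2, 1.5), "grooming"), ((0.1, 1.4), "grooming")]
model = train(labels)
print(classify(model, (4.8, 0.15)))   # -> walking
print(classify(model, (0.15, 1.6)))   # -> grooming
```

Once trained, the same model is applied frame by frame to unlabeled videos, which is what makes large-scale automated annotation tractable.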

Ultrasonic Vocalization (USV) Detector

December 21, 2016

David Barker from the National Institute on Drug Abuse Intramural Research Program has shared the following regarding the development of a device designed to allow the automatic detection of 50kHz ultrasonic vocalizations.

Ultrasonic vocalizations (USVs) have been used to infer animals’ affective states in multiple research paradigms, including animal models of drug abuse, depression, fear or anxiety disorders, and Parkinson’s disease, and in studying the neural substrates of reward processing. Currently, the analysis of USV data is performed manually and is therefore time-consuming.

The present method was developed to allow automated detection of 50-kHz ultrasonic vocalizations using a template detection procedure. The detector runs in XBAT, a MATLAB extension developed by the Bioacoustics Research Program at Cornell University. The specific template detection procedure for ultrasonic vocalizations, along with a number of companion tools, was developed and tested by our laboratory. Details of the detector’s performance can be found in our published work, and a detailed readme file is published along with the MATLAB package on our GitHub.
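The core of template detection is a sliding comparison between a stored call template and the incoming signal. The real detector works on 2-D spectrogram templates inside XBAT; this 1-D normalized cross-correlation sketch, with synthetic numbers, only illustrates the mechanics.

```python
# Illustrative sketch of template detection, the idea behind the
# XBAT-based USV detector. The real detector correlates 2-D spectrogram
# templates; this 1-D version with synthetic values just shows the
# mechanics of sliding a template and thresholding the match score.
import math

def normalize(xs):
    """Mean-center and scale to unit norm (constant windows map to zeros)."""
    mean = sum(xs) / len(xs)
    centered = [x - mean for x in xs]
    norm = math.sqrt(sum(c * c for c in centered)) or 1.0
    return [c / norm for c in centered]

def detect(signal, template, threshold=0.9):
    """Slide the template over the signal; return offsets whose
    normalized cross-correlation exceeds the threshold."""
    t = normalize(template)
    hits = []
    for i in range(len(signal) - len(template) + 1):
        window = normalize(signal[i:i + len(template)])
        score = sum(a * b for a, b in zip(window, t))
        if score > threshold:
            hits.append(i)
    return hits

template = [0.0, 1.0, 2.0, 1.0, 0.0]          # stylized call shape
signal = [0.0] * 10 + template + [0.0] * 10   # call embedded at offset 10
print(detect(signal, template))  # -> [10]
```

Detected offsets would then be converted back into time-stamps for counting and duration analysis.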

Our detector was designed to be freely shared with the USV research community in the hope that all members of the community might benefit from its use. We have included instructions for getting started with XBAT, running the detector, and developing new analysis tools. We encourage users who are familiar with MATLAB to develop and share new analysis tools. To facilitate this type of collaboration, all files have been shared as part of a GitHub repository, allowing suggested changes or novel contributions to be made to the software package. I would happily integrate analysis tools created by others into future releases of the detector.

Work on a detector for 22-kHz vocalizations is ongoing; detecting 22-kHz vocalizations, which are nearer to audible noise, poses greater technical challenges. Those interested in contributing can email me at djamesbarker@gmail-dot-com or find me on Twitter (@DavidBarker_PhD).

Behavioral Observation Research Interactive Software (BORIS)

Olivier Friard of the University of Turin has generously shared the following about BORIS:

BORIS is an open-source, multiplatform standalone program that provides a user-specific coding environment for computer-based review of previously recorded videos or live observations. The program allows the user to define a highly customised, project-based ethogram that can then be shared with collaborators, imported, or modified. Once the coding process is complete, the program can automatically extract a time budget from single or grouped observations and present an at-a-glance summary of the main behavioural features. The observation data and analyses can be exported in various formats (e.g., CSV or XLS). The behavioural events can also be plotted and exported in various graphic formats (e.g., PNG, EPS, and PDF).
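As a rough illustration of what a time budget is, the totals can be computed from coded state events in a few lines. The behaviors and times below are invented, and BORIS performs this (and much more) internally:

```python
# Hypothetical sketch of the time-budget idea: given coded state events
# (behavior, start_s, stop_s), total the time spent in each behavior.
# Event names and times are invented; this is not BORIS code.
from collections import defaultdict

def time_budget(events):
    totals = defaultdict(float)
    for behavior, start, stop in events:
        totals[behavior] += stop - start   # accumulate duration per behavior
    return dict(totals)

events = [("grooming", 0.0, 12.5), ("locomotion", 12.5, 30.0),
          ("grooming", 30.0, 41.0)]
print(time_budget(events))  # {'grooming': 23.5, 'locomotion': 17.5}
```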

BORIS is currently used around the world for strikingly different approaches to the study of animal and human behaviour.

The latest version of BORIS can be downloaded from the project website, where the manual and some tutorials are also available.


Friard, O. and Gamba, M. (2016), BORIS: a free, versatile open-source
event-logging software for video/audio coding and live observations.
Methods Ecol Evol, 7: 1325–1330. doi:10.1111/2041-210X.12584


Bonsai

Gonçalo Lopez of the Champalimaud Neuroscience Programme has shared the following about Bonsai:

Bonsai is an open-source visual programming language for composing reactive and asynchronous data streams coming from video cameras, microphones, electrophysiology systems or data acquisition boards. It gives the experimenter an easy way not only to collect data from all these devices simultaneously, but also to quickly and flexibly chain together real-time processing pipelines that can be fed back to actuator systems for closed-loop stimulation using sound, images or other kinds of digital and analog output.
Bonsai is being used around the world to design all kinds of behavioral experiments from eye-tracking to virtual reality, spanning multiple model organisms from fish to humans.
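The chaining idea can be imitated with ordinary Python generators. The stages below (an invented smoothing and thresholding pair) are only a sketch of the pipeline concept, not Bonsai's reactive implementation:

```python
# Bonsai composes reactive operators graphically; this generator-based
# sketch only illustrates chaining a source through processing stages
# into a sink. Stage names and parameters are invented for illustration.

def source(samples):
    for s in samples:          # stand-in for a camera/DAQ stream
        yield s

def smooth(stream, alpha=0.5):
    prev = None
    for s in stream:           # exponential smoothing stage
        prev = s if prev is None else alpha * s + (1 - alpha) * prev
        yield prev

def threshold(stream, level):
    for s in stream:           # emit a TTL-like boolean per sample
        yield s > level

# Chain the stages, exactly as operators are chained in a workflow
pipeline = threshold(smooth(source([0.0, 1.0, 4.0, 4.0])), level=2.0)
print(list(pipeline))  # [False, False, True, True]
```

In a closed-loop setup, the final stage would drive an actuator (sound, image, or digital output) instead of printing.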

The latest version of Bonsai can be downloaded from BitBucket, along with instructions for quickly setting up a working system.
A public user forum is also available where you can leave your questions and feedback about how to use Bonsai.


Scintillate

Ian Dublon, a former postdoctoral researcher, has shared the following about scintillate:


The program itself is relatively simple and originated out of a real need to rapidly appraise 2D calcium imaging datasets, and out of frustration with existing acquisition software that relies exclusively on defined regions of interest (ROIs). There is of course nothing wrong with ROIs, but it is useful to be able to rapidly appraise the whole matrix; it really depends on the biological question being asked.

At its most simple, when you open a stacked TIFF, scintillate uses MATLAB’s imabsdiff to look at the absolute difference across the image matrix for successive frames in the stack. Several other, more powerful tools, including pre-stimulus background subtraction and ICA (using the excellent FastICA), are provided to help the user ascertain the value of the studied preparation. This helps to confirm and pinpoint areas of signal change, allowing the imager to adjust the image-acquisition parameters or simply start afresh.
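For readers without MATLAB, the imabsdiff step amounts to a per-pixel absolute difference between successive frames. This toy version, with made-up pixel values, shows the idea:

```python
# A minimal stand-in for MATLAB's imabsdiff as scintillate uses it:
# absolute per-pixel difference between successive frames of a stack.
# Frames here are small nested lists with invented values; real use
# would operate on full image arrays.

def absdiff(a, b):
    """Per-pixel absolute difference of two equal-sized frames."""
    return [[abs(x - y) for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

def stack_activity(frames):
    """Sum of absolute differences between each pair of successive
    frames -- a quick per-transition measure of signal change."""
    return [sum(sum(row) for row in absdiff(f0, f1))
            for f0, f1 in zip(frames, frames[1:])]

frames = [
    [[10, 10], [10, 10]],
    [[10, 12], [10, 10]],   # small change: one pixel brightens
    [[30, 32], [30, 30]],   # large change: whole frame brightens
]
print(stack_activity(frames))  # [2, 80]
```

A spike in this trace flags the frames (and pixels) where something happened, without committing to an ROI in advance.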

Written as a MATLAB GUI, with source and GUIDE files provided alongside a getting-started manual, it is designed both for image-acquisition people and for MATLAB coders. If the Compiler toolbox is present, it is possible to package a version of scintillate that runs without a MATLAB installation, making it ideal for running on the image-acquisition workstation. Simply provide it with an acquired stacked TIFF and, within three or so clicks and no command-line syntax, you are ready to go.

Code is hosted on GitHub, and scintillate uses some freely available third-party code, which is gladly acknowledged throughout. Scintillate is not designed to replace traditional analysis methods but rather to provide an open-source means of rapid evaluation during the pre-processing stage. It is hoped that by hosting it on GitHub it may be developed further and adjusted to suit each person’s individual imaging needs.

Feeding Experimentation Device (FED) part 2: new design and code


The Feeding Experimentation Device (FED) is a free, open-source system for measuring food intake in rodents. FED uses an Arduino processor, a stepper motor, an infrared beam detector, and an SD card to record time-stamps of 20-mg pellets eaten by singly housed rodents. FED is powered by a battery, which allows it to be placed in colony caging or within other experimental equipment. The battery lasts ~5 days on a charge, providing uninterrupted feeding records over this duration. The electronics for building each FED cost around $150 USD, and the 3D-printed parts cost between $20 and $400, depending on access to 3D printers and desired print quality.

The Kravitz lab has published a large update of their Feeding Experimentation Device (FED) to their GitHub site, including updated 3D design files that print more easily and updates to the code that dispense pellets more reliably. Step-by-step build instructions are also available there.

Quantifying Animal Movement from Pre-recorded Videos

In their 2014 paper, “Visualizing and quantifying movement from pre-recorded videos: The spectral time-lapse (STL) algorithm,” Christopher Madan and Marcia Spetch propose an approach for summarizing animal movement data as a single image (the spectral time-lapse algorithm), as well as for automating the analysis of animal movement data.
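The temporal color-coding at the heart of the STL idea can be sketched as follows. This grayscale toy records, per pixel, the normalized time of the last detected change; the actual toolbox applies a proper colormap and overlays the background frame, and the frames below are synthetic:

```python
# Toy sketch of the spectral time-lapse idea: movement from each frame
# is detected by frame differencing and stamped into one summary image,
# coded by when it occurred (later changes get higher values). The
# threshold and frame data are invented; this is not the STL toolbox.

def spectral_time_lapse(frames, change_threshold=5):
    rows, cols = len(frames[0]), len(frames[0][0])
    summary = [[0.0] * cols for _ in range(rows)]
    for t in range(1, len(frames)):
        for r in range(rows):
            for c in range(cols):
                if abs(frames[t][r][c] - frames[t - 1][r][c]) > change_threshold:
                    summary[r][c] = t / (len(frames) - 1)  # later = brighter
    return summary

frames = [
    [[0, 0], [0, 0]],
    [[50, 0], [0, 0]],    # animal appears at top-left early
    [[50, 0], [0, 50]],   # animal also at bottom-right later
]
print(spectral_time_lapse(frames))  # [[0.5, 0.0], [0.0, 1.0]]
```

Reading the summary image, pixel intensity (or hue, in the real algorithm) tells you both where and when the animal moved.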


The paper includes an implementation of the algorithm as a MATLAB toolbox, available on GitHub.
As an example application, the toolbox has been used to analyze movement data of pigeons solving the traveling salesperson problem (Baron et al., 2015).

Madan, C. and Spetch, M. (2014). Visualizing and quantifying movement from pre-recorded videos: The spectral time-lapse (STL) algorithm. F1000Research, 3: 19.

Oculomatic Eye-Tracking

Jan Zimmermann, a postdoctoral fellow in the Glimcher Lab at New York University, has shared the following about Oculomatic:

Video-based noninvasive eye trackers are an extremely useful tool for many areas of research. Many open-source eye trackers are available, but current open-source systems are not designed to track eye movements with the temporal resolution required to investigate the mechanisms of oculomotor behavior. Commercial systems are available but employ closed-source hardware and software and are relatively expensive, limiting widespread use.

Here we present Oculomatic, an open-source software and modular hardware solution for eye tracking in humans and non-human primates. Our fully modular hardware implementation relies on machine-vision USB3 camera systems paired with affordable lens options from the surveillance sector. Our cross-platform software implementation (C++) relies on openFrameworks as well as OpenCV and uses binary image statistics (following Green’s theorem) to compute pupil location and diameter. Oculomatic uses data-acquisition devices to output eye position as calibrated analog voltages for direct integration into the electrophysiological acquisition chain.
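The binary-image-statistics step amounts to computing image moments of the thresholded pupil. This toy version (synthetic image, equal-area-disc diameter) sketches the computation rather than Oculomatic's C++ implementation:

```python
# Sketch of pupil location and diameter from a binary (thresholded)
# image using zeroth- and first-order moments: centroid = (M10/M00,
# M01/M00), area = M00. The image and the equal-area-disc diameter
# convention are illustrative, not Oculomatic's actual code.
import math

def pupil_stats(binary):
    """binary: 2-D list of 0/1 'pupil' pixels.
    Returns (cx, cy, diameter), treating the blob as an equal-area disc."""
    m00 = m10 = m01 = 0
    for y, row in enumerate(binary):
        for x, v in enumerate(row):
            m00 += v          # area (pixel count)
            m10 += v * x      # first moment in x
            m01 += v * y      # first moment in y
    cx, cy = m10 / m00, m01 / m00
    diameter = 2.0 * math.sqrt(m00 / math.pi)  # disc of equal area
    return cx, cy, diameter

image = [[0, 0, 0, 0],
         [0, 1, 1, 0],
         [0, 1, 1, 0],
         [0, 0, 0, 0]]
cx, cy, d = pupil_stats(image)
print(cx, cy)  # 1.5 1.5
```

Because the statistics are simple sums over the image, they can run at high frame rates, which is what makes sub-2 ms latency plausible on commodity hardware.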

Oculomatic features high temporal resolution (up to 600 Hz), real-time eye tracking with high spatial accuracy (< 0.5°), and low system latency (< 1.8 ms) at relatively low cost (< 1,000 USD). We tested Oculomatic during regular monkey training and task performance in two independent laboratories and compared its performance to existing scleral search coils. While being fully non-invasive, Oculomatic compared favorably to the gold standard of search coils, with only a slight decrease in spatial accuracy.

We propose that this system can support a range of research into the properties and neural mechanisms of oculomotor behavior, as well as provide an affordable tool to further scale non-human primate electrophysiology.


The most recent version of the software can be found at:

Zimmermann, J., Vazquez, Y., Glimcher, P. W., Pesaran, B., & Louie, K. (2016). Oculomatic: High speed, reliable, and accurate open-source eye tracking for humans and non-human primates. Journal of Neuroscience Methods, 270, 138-146.