
Ultrasonic Vocalization (USV) Detector

December 21, 2016

David Barker from the National Institute on Drug Abuse Intramural Research Program has shared the following regarding the development of a device designed to allow the automatic detection of 50-kHz ultrasonic vocalizations.


Ultrasonic vocalizations (USVs) have been utilized to infer animals’ affective states in multiple research paradigms, including animal models of drug abuse, depression, fear or anxiety disorders, and Parkinson’s disease, as well as in studies of the neural substrates of reward processing. Currently, the analysis of USV data is performed manually and is therefore time-consuming.

The present method was developed to allow automated detection of 50-kHz ultrasonic vocalizations using a template detection procedure. The detector runs in XBAT, a MATLAB extension developed by the Bioacoustics Research Program at Cornell University. The specific template detection procedure for ultrasonic vocalizations, along with a number of companion tools, was developed and tested by our laboratory. Details of the detector’s performance can be found in our published work, and a detailed readme file is published along with the MATLAB package on our GitHub.
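As a rough illustration of the underlying idea only (not the XBAT detector itself), the Python sketch below slides a spectrogram template across a recording and flags time bins where the normalized cross-correlation exceeds a threshold. The template, spectrogram parameters, and threshold are placeholder assumptions.

import numpy as np
from scipy.signal import spectrogram

def detect_calls(audio, fs, template, threshold=0.6):
    """Return candidate call onset times (s) where the spectrogram matches
    a call template; a simplified stand-in for template detection."""
    f, t, S = spectrogram(audio, fs=fs, nperseg=512, noverlap=256)
    S = np.log1p(S)                              # compress dynamic range
    n_freq, n_time = template.shape
    S = S[:n_freq, :]                            # keep the template's frequency rows
    tpl = (template - template.mean()) / (template.std() + 1e-12)
    hits = []
    for i in range(S.shape[1] - n_time):
        win = S[:, i:i + n_time]
        win = (win - win.mean()) / (win.std() + 1e-12)
        if np.mean(win * tpl) > threshold:       # normalized cross-correlation score
            hits.append(t[i])
    return hits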

Our detector was designed to be freely shared with the USV research community in the hope that all members of the community might benefit from its use. We have included instructions for getting started with XBAT, running the detector, and developing new analysis tools. We encourage users who are familiar with MATLAB to develop and share new analysis tools. To facilitate this type of collaboration, all files have been shared as part of a GitHub repository, allowing suggested changes or novel contributions to be made to the software package. I would happily integrate novel analysis tools created by others into future releases of the detector.

Work on a detector for 22-kHz vocalizations is ongoing; detecting these calls, which lie nearer to audible noise, is technically more challenging. Those interested in contributing can email me at djamesbarker@gmail-dot-com or find me on Twitter (@DavidBarker_PhD).


Behavioral Observation Research Interactive Software (BORIS)

Olivier Friard of the University of Turin has generously shared the following about BORIS:


BORIS is an open-source, multiplatform, standalone program that provides a user-specific coding environment for computer-based review of previously recorded videos or for live observations. The program allows the user to define a highly customised, project-based ethogram that can then be shared with collaborators, imported, or modified. Once the coding process is complete, the program can automatically extract a time budget for single or grouped observations and present an at-a-glance summary of the main behavioural features. The observation data and analyses can be exported in various formats (e.g., CSV or XLS). The behavioural events can also be plotted and exported in various graphic formats (e.g., PNG, EPS, and PDF).
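Purely as a hedged illustration (not part of BORIS itself), the short Python sketch below shows how a time budget can be computed from a CSV export of coded state events; the column names are assumptions and may differ from an actual BORIS export.

import csv
from collections import defaultdict

def time_budget(csv_path):
    """Sum the duration of each behaviour from coded events with (assumed)
    columns 'Behavior', 'Start (s)', and 'Stop (s)'."""
    totals = defaultdict(float)
    with open(csv_path, newline="") as fh:
        for row in csv.DictReader(fh):
            totals[row["Behavior"]] += float(row["Stop (s)"]) - float(row["Start (s)"])
    return dict(totals)

# Example: print(time_budget("observation_export.csv"))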

BORIS is currently used around the world for strikingly different approaches to the study of animal and human behaviour.

The latest version of BORIS can be downloaded from www.boris.unito.it, where the manual and some tutorials are also available.



Friard, O. and Gamba, M. (2016), BORIS: a free, versatile open-source
event-logging software for video/audio coding and live observations.
Methods Ecol Evol, 7: 1325–1330. doi:10.1111/2041-210X.12584

Bonsai

Gonçalo Lopez of the Champalimaud Neuroscience Programme has shared the following about Bonsai:

Bonsai is an open-source visual programming language for composing reactive and asynchronous data streams coming from video cameras, microphones, electrophysiology systems or data acquisition boards. It gives the experimenter an easy way not only to collect data from all these devices simultaneously, but also to quickly and flexibly chain together real-time processing pipelines that can be fed back to actuator systems for closed-loop stimulation using sound, images or other kinds of digital and analog output.
Bonsai is being used around the world to design all kinds of behavioral experiments from eye-tracking to virtual reality, spanning multiple model organisms from fish to humans.
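Bonsai itself is a visual language built on reactive streams, so no Python is involved; purely as a conceptual sketch of the same idea, the snippet below chains a fake camera stream through a processing stage and into an actuator stage to form a closed loop. Every function here is a hypothetical placeholder.

import random
import time

def camera_frames(n=10):
    """Stand-in source stream: yields fake brightness samples."""
    for _ in range(n):
        yield random.random()
        time.sleep(0.01)

def threshold(stream, level=0.5):
    """Processing stage: turn raw samples into binary detection events."""
    for sample in stream:
        yield sample > level

def drive_actuator(events):
    """Sink stage: a real closed loop would trigger sound, light, etc. here."""
    for detected in events:
        print("stimulus ON" if detected else "stimulus off")

drive_actuator(threshold(camera_frames()))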

The latest version of Bonsai can be downloaded from BitBucket, along with instructions for quickly setting up a working system.
A public user forum is also available where you can leave your questions and feedback about how to use Bonsai.

Scintillate

Ian Dublon, a former postdoctoral researcher, has shared the following about scintillate:



The program itself is relatively simple and originated from the real need to rapidly appraise 2D calcium imaging datasets and from frustrations with existing acquisition software that relies exclusively on defined regions of interest (ROIs). There is of course nothing wrong with ROIs, but it is useful to be able to rapidly appraise the whole matrix; it really depends on the biological question being asked.

At its simplest, when you open a stacked TIFF, scintillate uses MATLAB’s imabsdiff to look at the absolute difference across the image matrix for successive frames in the stack. Several other, more powerful tools, including pre-stimulus background subtraction and ICA (using the excellent FastICA), are provided to help the user ascertain the value of the studied preparation. This helps to confirm and pinpoint areas of signal change, allowing the imager to adjust image acquisition parameters or simply start afresh.
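For readers who prefer Python, the rough sketch below reproduces just that core step (absolute difference between successive frames of a TIFF stack); it is not the scintillate GUI, and reading the stack with the tifffile package is an assumption about available libraries.

import numpy as np
import tifffile

def frame_differences(stack_path):
    """Return |frame[i+1] - frame[i]| for every successive pair in a stack."""
    stack = tifffile.imread(stack_path).astype(np.float32)  # shape (frames, h, w)
    return np.abs(np.diff(stack, axis=0))

# Example: a quick map of where the signal changes most over the recording
# diffs = frame_differences("recording.tif")
# change_map = diffs.max(axis=0)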

Written as a GUI in MATLAB, with source and GUIDE files provided alongside a getting-started manual, it is designed both for image acquisition people and for MATLAB coders. If the Compiler toolbox is present, it is possible to package a version of scintillate that will run without a MATLAB install, making it ideal for running on the image acquisition workstation. Simply provide it with an acquired stacked TIFF and, within three or so clicks and with no command-line syntax, you are ready to go.

Code is hosted on GitHub at github.com/dublon/scintillate. Scintillate uses some freely available third-party code, which is gladly acknowledged throughout. Scintillate is not designed to replace traditional analysis methods but rather to provide an open-source means of rapid evaluation during the pre-processing stage. It is hoped that by hosting it on GitHub it may be developed further and thus adjusted to suit each person’s individual imaging needs.

Feeding Experimentation Device (FED) part 2: new design and code


The Feeding Experimentation Device (FED) is a free, open-source system for measuring food intake in rodents. FED uses an Arduino processor, a stepper motor, an infrared beam detector, and an SD card to record time-stamps of 20-mg pellets eaten by singly housed rodents. FED is powered by a battery, which allows it to be placed in colony caging or within other experimental equipment. The battery lasts ~5 days on a charge, providing uninterrupted feeding records over this duration. The electronics for building each FED cost around $150 USD, and the 3D-printed parts cost between $20 and $400, depending on access to 3D printers and desired print quality.
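The device firmware itself runs on the Arduino; purely as a hedged example of what can be done with the resulting SD-card log afterwards, the Python sketch below counts pellet retrievals per hour. The assumed log format (one ISO timestamp per line) is an illustration, not the actual FED file layout.

from datetime import datetime

def pellets_per_hour(log_path):
    """Count pellet retrievals per calendar hour from a timestamp log."""
    counts = {}
    with open(log_path) as fh:
        for line in fh:
            if not line.strip():
                continue
            ts = datetime.fromisoformat(line.strip())
            hour = ts.replace(minute=0, second=0, microsecond=0)
            counts[hour] = counts.get(hour, 0) + 1
    return counts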

The Kravitz lab has published a major update of their Feeding Experimentation Device (FED) to their GitHub site (https://github.com/KravitzLab/fed), including updated 3D design files that print more easily and code updates that dispense pellets more reliably. Step-by-step build instructions are available here: https://github.com/KravitzLab/fed/wiki

Quantifying Animal Movement from Pre-recorded Videos

In their 2014 paper, “Visualizing and quantifying movement from pre-recorded videos: The spectral time-lapse (STL) algorithm,” Christopher Madan and Marcia Spetch propose an approach for summarizing animal movement data as a single image (the spectral time-lapse algorithm) as well as for automating the analysis of animal movement data.


The paper includes an implementation of the algorithm as a MATLAB toolbox, available on GitHub.
As an example application, the toolbox has been used to analyze movement data of pigeons solving the traveling salesperson problem (Baron et al., 2015).
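As a loose, hedged sketch of the general idea (a simplification, not the published MATLAB toolbox), the Python snippet below tints each frame’s deviation from the background by a time-indexed spectral colour and collapses the tinted frames into a single overlay image.

import numpy as np
import matplotlib.pyplot as plt

def spectral_time_lapse(frames):
    """frames: array of shape (n, h, w) with grayscale values in [0, 1]."""
    n = len(frames)
    background = np.median(frames, axis=0)
    colours = plt.get_cmap("jet")(np.linspace(0, 1, n))[:, :3]  # (n, 3) RGB per frame
    out = np.zeros(frames.shape[1:] + (3,))
    for i, frame in enumerate(frames):
        motion = np.abs(frame - background)          # deviation from background
        tinted = motion[..., None] * colours[i]      # colour-code by time
        out = np.maximum(out, tinted)                # overlay all frames
    return out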

Madan, C. and Spetch, M. (2014), Visualizing and quantifying movement from pre-recorded videos: The spectral time-lapse (STL) algorithm. F1000Research, 3: 19.

Oculomatic Eye-Tracking

Jan Zimmermann, a postdoctoral fellow in the Glimcher Lab at New York University, has shared the following about Oculomatic:

Video-based noninvasive eye trackers are an extremely useful tool for many areas of research. Many open-source eye trackers are available, but current open-source systems are not designed to track eye movements with the temporal resolution required to investigate the mechanisms of oculomotor behavior. Commercial systems are available but employ closed-source hardware and software and are relatively expensive, limiting widespread use.

Here we present Oculomatic, an open-source software and modular hardware solution for eye tracking in humans and non-human primates. Our fully modular hardware implementation relies on machine-vision USB3 camera systems paired with affordable lens options from the surveillance sector. Our cross-platform software implementation (C++) relies on openFrameworks as well as OpenCV and uses binary image statistics (following Green’s theorem) to compute pupil location and diameter. Oculomatic makes use of data acquisition devices to output eye position as calibrated analog voltages for direct integration into the electrophysiological acquisition chain.
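As a hedged illustration of that kind of binary-image-moment computation (the actual Oculomatic implementation is in C++ with openFrameworks), the OpenCV/Python sketch below thresholds the dark pupil and derives its centre and equivalent diameter from image moments; the threshold value is an arbitrary assumption.

import numpy as np
import cv2

def pupil_from_frame(gray_frame, thresh=40):
    """Estimate pupil centre (x, y) and diameter from a grayscale eye image."""
    _, binary = cv2.threshold(gray_frame, thresh, 255, cv2.THRESH_BINARY_INV)
    m = cv2.moments(binary, binaryImage=True)
    if m["m00"] == 0:
        return None                                    # no dark, pupil-like region found
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]  # centroid from first moments
    diameter = 2.0 * np.sqrt(m["m00"] / np.pi)         # diameter of an equivalent circle
    return (cx, cy), diameter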

Oculomatic features high temporal resolution (up to 600 Hz), real-time eye tracking with high spatial accuracy (< 0.5°), and low system latency (< 1.8 ms) at a relatively low cost (< $1,000 USD). We tested Oculomatic during regular monkey training and task performance in two independent laboratories and compared its performance to existing scleral search coils. While being fully non-invasive, Oculomatic performed favorably against the gold standard of search coils, with only a slight decrease in spatial accuracy.

We propose that this system can support a range of research into the properties and neural mechanisms of oculomotor behavior, as well as provide an affordable tool to further scale non-human primate electrophysiology.



The most recent version of the software can be found at: https://github.com/oculomatic/oculomatic-release.


Zimmermann, J., Vazquez, Y., Glimcher, P. W., Pesaran, B., & Louie, K. (2016). Oculomatic: High speed, reliable, and accurate open-source eye tracking for humans and non-human primates. Journal of Neuroscience Methods, 270, 138-146.

UCLA Miniscope Project

Daniel Aharoni of the Golshani, Silva, and Khakh Lab at UCLA has shared the following about Miniscope:


This open-source miniature fluorescence microscope uses wide-field fluorescence imaging to record neural activity in awake, freely behaving mice. The Miniscope has a mass of 3 grams and uses a single, flexible coaxial cable (0.3 mm to 1.5 mm in diameter) to carry power, control signals, and imaging data to open-source data acquisition (DAQ) hardware and software. Miniscope.org provides a centralized location for sharing design files, source code, and other relevant information so that a community of users can share ideas and developments related to this important imaging technique. Our goal is to help disseminate this technology to the larger neuroscience community and build a foundation of users that will continue advancing this technology and contribute back to the project. While the Miniscope system described here is not an off-the-shelf commercial solution, we have focused on making it as easy as possible for the average neuroscience lab to build and modify, requiring minimal soldering and hands-on assembly.
Video demonstrating GCaMP6f imaging in CA1 using the UCLA Miniscope

Laubach Lab GitHub Repository

The Laubach Lab at American University investigates executive control and decision making, focusing on the role of the prefrontal cortex. Through their GitHub repository, these researchers provide 3D print files for many of the behavioral devices used in their lab, including a nosepoke and a lickometer designed for rats. The repository also includes a script that reads MedPC data files into Python in a usable format.
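The lab’s own script should be the reference; purely as a minimal, hedged sketch of the general approach to MedPC-style text output (header fields followed by lettered data arrays), something like the following can pull the arrays into Python.

import re

def read_medpc_arrays(path):
    """Return {array_letter: [values]} from a MedPC-style text file."""
    arrays, current = {}, None
    with open(path) as fh:
        for line in fh:
            label = re.match(r"^([A-Z]):\s*$", line)        # e.g. 'C:' opens an array
            if label:
                current = label.group(1)
                arrays[current] = []
            elif current and re.match(r"^\s+\d+:", line):    # e.g. '   0:   5.000  12.000'
                values = line.split(":", 1)[1].split()
                arrays[current].extend(float(v) for v in values)
    return arrays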

Hao Chen lab, UTHSC – openBehavior repository

The openBehavior GitHub repository from Hao Chen’s lab at UTHSC aims to establish a computing platform for rodent behavior research using the Raspberry Pi computer. They have built several devices for conducting operant conditioning and monitoring environmental data.

The operant licking device can be placed in a standard rat home cage and can run fixed ratio, variable ratio, or progressive ratio schedules. A preprint describing this project, including data on sucrose vs. water intake, is available. Detailed instructions for making the device are also provided.
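Purely to illustrate how those three schedule types differ (a device-independent sketch, not the repository’s Raspberry Pi code), the following Python function returns the number of licks required for a given reward under each schedule; the ratio value is an arbitrary placeholder.

import random

def licks_required(schedule, reward_number, ratio=10):
    """Number of licks needed to earn reward `reward_number` (1-based)."""
    if schedule == "FR":                       # fixed ratio: always the same count
        return ratio
    if schedule == "VR":                       # variable ratio: varies around a mean of `ratio`
        return random.randint(1, 2 * ratio - 1)
    if schedule == "PR":                       # progressive ratio: requirement grows with each reward
        return ratio * reward_number
    raise ValueError("unknown schedule: %s" % schedule)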

The environment sensor can record the temperature, humidity, barometric pressure, and illumination at fixed time intervals and automatically transfer the data to a remote server.
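A hedged sketch of that fixed-interval logging loop is shown below; read_environment is a hypothetical placeholder for the actual sensor drivers on the Pi, and the remote transfer step is reduced to appending rows to a local CSV.

import csv
import random
import time
from datetime import datetime

def read_environment():
    """Hypothetical placeholder returning fake temperature, humidity, pressure, and light."""
    return round(22 + random.random(), 2), 45.0, 1013.2, 300

def log_environment(path="environment_log.csv", interval_s=60, n_samples=5):
    """Append one reading every `interval_s` seconds; a real deployment would
    also push the file to a remote server."""
    for _ in range(n_samples):
        with open(path, "a", newline="") as fh:
            csv.writer(fh).writerow([datetime.now().isoformat(), *read_environment()])
        time.sleep(interval_s)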

There is also a stand-alone RFID reader for EM4100 implantable glass chips, a motion-sensor add-on for standard operant chambers, and several other devices.