
Feeding Experimentation Device (FED) part 2: new design and code


The Feeding Experimentation Device (FED) is a free, open-source system for measuring food intake in rodents. FED uses an Arduino processor, a stepper motor, an infrared beam detector, and an SD card to record time-stamps of 20 mg pellets eaten by singly housed rodents. FED is powered by a battery, which allows it to be placed in colony caging or within other experimental equipment. The battery lasts ~5 days on a charge, providing uninterrupted feeding records over this duration. The electronics for building each FED cost around $150 USD, and the 3D-printed parts cost between $20 and $400, depending on access to 3D printers and desired print quality.

The Kravitz lab has published a major update to the Feeding Experimentation Device (FED) on their GitHub site (https://github.com/KravitzLab/fed), including updated 3D design files that print more easily and revised code that dispenses pellets more reliably. Step-by-step build instructions are available here: https://github.com/KravitzLab/fed/wiki
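The control logic the device implements is essentially a dispense-and-log loop: keep one pellet in the delivery well, and whenever the infrared beam reports that the pellet has been taken, write a time-stamp to the SD card and advance the stepper motor to drop the next pellet. The sketch below renders that loop in Python for illustration only; the real device runs an Arduino sketch (available in the repository above), and pellet_in_well() and rotate_pellet_disk() are hypothetical stand-ins for the beam detector and stepper hardware.

    import csv
    import time
    from datetime import datetime

    def pellet_in_well():
        # Hypothetical read of the IR beam detector: True while a pellet blocks the beam.
        return True  # placeholder so the sketch runs without hardware

    def rotate_pellet_disk():
        # Hypothetical stepper move that drops a single 20 mg pellet into the well.
        pass

    def dispense_and_log(logfile="fed_log.csv", poll_s=0.05):
        """Keep one pellet available and time-stamp every retrieval."""
        with open(logfile, "a", newline="") as f:
            writer = csv.writer(f)
            rotate_pellet_disk()                 # prime the well with the first pellet
            while True:
                if not pellet_in_well():         # beam cleared -> pellet was taken
                    writer.writerow([datetime.now().isoformat()])
                    f.flush()                    # make sure the record survives power loss
                    rotate_pellet_disk()         # drop the next pellet
                time.sleep(poll_s)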

Quantifying Animal Movement from Pre-recorded Videos

In their 2014 paper, “Visualizing and quantifying movement from pre-recorded videos: The spectral time-lapse (STL) algorithm,” Christopher Madan and Marcia Spetch propose an approach for summarizing animal movement data as a single image (the spectral time-lapse algorithm) and for automating the analysis of movement from pre-recorded video.


The paper includes an implementation of the algorithm as a MATLAB toolbox, available on GitHub.
As an example application, the toolbox has been used to analyze movement data from pigeons solving the traveling salesperson problem (Baron et al., 2015).
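The general idea of a spectral time-lapse can be sketched in a few lines of Python with OpenCV and NumPy: sample frames across the video, estimate a static background, find where each sampled frame differs from it, and tint those pixels with a colour that encodes time. This is only an approximation of the published algorithm (the toolbox itself is MATLAB), and the frame count, threshold, and blue-to-red colour mapping below are illustrative assumptions.

    import cv2
    import numpy as np

    def spectral_time_lapse(video_path, n_samples=50, thresh=30):
        """Overlay the animal's position at sampled time points onto one image,
        coloured from blue (early) to red (late)."""
        cap = cv2.VideoCapture(video_path)
        n_total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
        frames = []
        for i in np.linspace(0, n_total - 1, n_samples).astype(int):
            cap.set(cv2.CAP_PROP_POS_FRAMES, int(i))
            ok, frame = cap.read()
            if ok:
                frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
        cap.release()

        background = np.median(np.stack(frames), axis=0).astype(np.uint8)
        overlay = cv2.cvtColor(background, cv2.COLOR_GRAY2BGR)
        for t, frame in enumerate(frames):
            mask = cv2.absdiff(frame, background) > thresh         # pixels occupied by the animal
            hue = int(120 * (1 - t / max(len(frames) - 1, 1)))     # 120 = blue (early) ... 0 = red (late)
            color = cv2.cvtColor(np.uint8([[[hue, 255, 255]]]), cv2.COLOR_HSV2BGR)[0, 0]
            overlay[mask] = color
        return overlay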

Madan, C. R., & Spetch, M. L. (2014). Visualizing and quantifying movement from pre-recorded videos: The spectral time-lapse (STL) algorithm. F1000Research, 3, 19.

Oculomatic Eye-Tracking

Jan Zimmermann, a postdoctoral fellow in the Glimcher Lab at New York University, has shared the following about Oculomatic:

Video-based noninvasive eye trackers are an extremely useful tool for many areas of research. Many open-source eye trackers are available, but current open-source systems are not designed to track eye movements with the temporal resolution required to investigate the mechanisms of oculomotor behavior. Commercial systems are available but employ closed-source hardware and software and are relatively expensive, therefore limiting widespread use.

Here we present Oculomatic, an open-source software and modular hardware solution to eye tracking for use in humans and non-human primates. Our fully modular hardware implementation relies on machine-vision USB3 camera systems paired with affordable lens options from the surveillance sector. Our cross-platform software implementation (C++) relies on openFrameworks as well as OpenCV and uses binary image statistics (following Green’s theorem) to compute pupil location and diameter. Oculomatic makes use of data acquisition devices to output eye position as calibrated analog voltages for direct integration into the electrophysiological acquisition chain.

Oculomatic features high temporal resolution (up to 600 Hz), real-time eye tracking with high spatial accuracy (< 0.5°), and low system latency (< 1.8 ms) at a relatively low cost (< 1000 USD). We tested Oculomatic during regular monkey training and task performance in two independent laboratories and compared its performance to existing scleral search coils. While being fully non-invasive, Oculomatic compared favorably to the gold-standard search coils, with only a slight decrease in spatial accuracy.

We propose this system can support a range of research into the properties and neural mechanisms of oculomotor behavior as well as provide an affordable tool to scale non-human primate electrophysiology further.



The most recent version of the software can be found at: https://github.com/oculomatic/oculomatic-release.
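The pupil-detection step described above (binary image statistics giving pupil location and diameter; computing area and centroid from a closed contour is where Green’s theorem enters) can be approximated in a few lines of Python with OpenCV. Oculomatic itself is written in C++ on openFrameworks, so this is an illustrative sketch rather than the project’s code, and the dark-pupil threshold is an assumption.

    import cv2
    import numpy as np

    def pupil_center_and_diameter(gray_frame, dark_thresh=40):
        """Treat the pupil as the largest dark blob: threshold, take the biggest
        contour, and derive the centre (image moments) and an equivalent-circle
        diameter (blob area)."""
        _, binary = cv2.threshold(gray_frame, dark_thresh, 255, cv2.THRESH_BINARY_INV)
        contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            return None
        pupil = max(contours, key=cv2.contourArea)
        m = cv2.moments(pupil)
        if m["m00"] == 0:
            return None
        cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]          # centroid of the blob
        diameter = 2.0 * np.sqrt(cv2.contourArea(pupil) / np.pi)   # diameter of an equal-area circle
        return (cx, cy), diameter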


Zimmermann, J., Vazquez, Y., Glimcher, P. W., Pesaran, B., & Louie, K. (2016). Oculomatic: High speed, reliable, and accurate open-source eye tracking for humans and non-human primates. Journal of Neuroscience Methods, 270, 138-146.

UCLA Miniscope Project

Daniel Aharoni of the Golshani, Silva, and Khakh labs at UCLA has shared the following about the Miniscope:


This open-source miniature fluorescence microscope uses wide-field fluorescence imaging to record neural activity in awake, freely behaving mice. The Miniscope has a mass of 3 grams and uses a single flexible coaxial cable (0.3 mm to 1.5 mm diameter) to carry power, control signals, and imaging data to open-source data acquisition (DAQ) hardware and software. Miniscope.org provides a centralized location for sharing design files, source code, and other relevant information so that a community of users can share ideas and developments related to this important imaging technique. Our goal is to help disseminate this technology to the larger neuroscience community and build a foundation of users that will continue advancing this technology and contributing back to the project. While the Miniscope system described here is not an off-the-shelf commercial solution, we have focused on making it as easy as possible for the average neuroscience lab to build and modify, requiring minimal soldering and hands-on assembly.
Video demonstrating GCaMP6f imaging in CA1 using the UCLA Miniscope

Laubach Lab GitHub Repository

The Laubach Lab at American University investigates executive control and decision making, focusing on the role of the prefrontal cortex. Through their GitHub repository, these researchers provide 3D print files for many of the behavioral devices used in their lab, including a nose poke and a lickometer designed for rats. The repository also includes a script that reads MedPC data files into Python in a usable format.
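Raw MedPC data files are plain text: session metadata at the top, followed by each stored variable written out as a lettered array of indexed values. The simplified parser below is written under that assumption for illustration; it is not the Laubach Lab’s script, which handles the format’s edge cases and should be preferred.

    import re
    from collections import defaultdict

    def read_medpc(path):
        """Parse a raw MedPC file into {array letter: list of values}.
        Assumes the common layout: a line like 'A:' starts an array and the
        following lines hold 'index: value value ...' rows."""
        data = defaultdict(list)
        current = None
        with open(path) as f:
            for line in f:
                header = re.match(r"^([A-Z]):\s*$", line)
                if header:
                    current = header.group(1)
                    continue
                if current and re.match(r"^\s+\d+:", line):
                    values = line.split(":", 1)[1].split()
                    data[current].extend(float(v) for v in values)
                else:
                    current = None   # back to metadata / single-value lines
        return dict(data)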

Hao Chen lab, UTHSC – openBehavior repository

The openBehavior GitHub repository from Hao Chen’s lab at UTHSC aims to establish a computing platform for rodent behavior research based on the Raspberry Pi. They have built several devices for conducting operant conditioning and monitoring environmental data.

The operant licking device can be placed in a standard rat home cage and can run fixed-ratio, variable-ratio, or progressive-ratio schedules. A preprint describing this project, including data on sucrose vs. water intake, is available. Detailed instructions for building the device are also provided.
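As a rough illustration of how a fixed-ratio schedule runs on a Raspberry Pi, the sketch below counts lick contacts on one GPIO input and pulses a reward output after every N licks. The pin numbers, debounce interval, and reward pulse duration are placeholders, not the lab’s actual wiring or code.

    import time
    import RPi.GPIO as GPIO

    LICK_PIN, REWARD_PIN = 17, 27   # BCM pin numbers (placeholders)
    FIXED_RATIO = 5                 # licks required per reward

    GPIO.setmode(GPIO.BCM)
    GPIO.setup(LICK_PIN, GPIO.IN, pull_up_down=GPIO.PUD_UP)
    GPIO.setup(REWARD_PIN, GPIO.OUT, initial=GPIO.LOW)

    licks = 0
    try:
        while True:
            GPIO.wait_for_edge(LICK_PIN, GPIO.FALLING)   # block until one lick is detected
            licks += 1
            print(time.time(), "lick", licks)
            if licks % FIXED_RATIO == 0:                 # ratio met: deliver reward
                GPIO.output(REWARD_PIN, GPIO.HIGH)
                time.sleep(0.05)                         # open the valve briefly
                GPIO.output(REWARD_PIN, GPIO.LOW)
            time.sleep(0.01)                             # crude debounce between licks
    finally:
        GPIO.cleanup()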

The environment sensor can record the temperature, humidity, barometric pressure, and illumination at fixed time intervals and automatically transfer the data to a remote server.
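That sample-and-upload pattern might look like the sketch below: read the sensors at a fixed interval, stamp the values with the time, and post them to a server. The sensor read is a placeholder and the endpoint is hypothetical; this illustrates only the loop, not the lab’s implementation.

    import time
    import requests

    SERVER_URL = "http://example.org/upload"   # hypothetical endpoint
    INTERVAL_S = 300                           # sample every five minutes

    def read_sensors():
        # Placeholder for the real I2C/GPIO sensor reads on the Pi.
        return {"temp_c": 22.0, "humidity": 45.0, "pressure_hpa": 1013.0, "lux": 150.0}

    while True:
        sample = read_sensors()
        sample["timestamp"] = time.time()
        try:
            requests.post(SERVER_URL, json=sample, timeout=10)   # push to the remote server
        except requests.RequestException:
            pass                                                 # keep sampling if the network drops
        time.sleep(INTERVAL_S)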

There is also a standalone RFID reader for EM4100 implantable glass chips, a motion sensor add-on for standard operant chambers, and several other devices.

WaveSurfer

WaveSurfer is an open-source application for neurophysiology data acquisition and analysis. The program is written in MATLAB and evolved from an earlier open-source software package called Ephus. WaveSurfer is currently in pre-release, but can be downloaded from the WaveSurfer webpage or the WaveSurfer GitHub repository.

The project was initiated by the Svoboda Lab and developed as a collaborative effort among several research groups at the Howard Hughes Medical Institute’s Janelia Research Campus. Janelia is a major proponent of collaboration and open science, providing documentation for dozens of tools and innovations developed on its campus through its webpage, including several tools specific to behavioral neuroscience research.

 

Automated Home-Cage Functional Imaging

Timothy Murphy and his colleagues at the University of British Columbia have developed an automated system for mesoscopic functional imaging that allows subjects to self-initiate head-fixation and imaging within the home cage. In their 2016 paper, “High-throughput automated home-cage mesoscopic functional imaging of mouse cortex,” Dr. Murphy and his colleagues present this device and demonstrate its use with a group of calcium indicator transgenic mice. The supplementary material for this paper includes a diagram of the hardware, a graphic representation of the training cage, several videos of subjects interacting with the device, and sample imaging data. The Python source code and 3D print files can be found on Dr. Murphy’s UBC webpage.
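At its core the self-initiated workflow is a loop: wait for a mouse to position itself at the fixation point, engage head-fixation, acquire frames for a set duration, then release the animal. The sketch below shows only that control flow; the sensor, latch, and camera-trigger functions are hypothetical placeholders, and the actual Python source is available from the authors as noted above.

    import time

    def mouse_at_fixation_point():
        # Placeholder for the real detection hardware at the fixation tube.
        return False

    def set_fixation(engaged):
        print("fixation", "engaged" if engaged else "released")   # placeholder actuator

    def acquire_frames(duration_s):
        print(f"imaging cortex for {duration_s} s")               # placeholder camera trigger

    def home_cage_loop(session_s=60, poll_s=0.1):
        """Wait for a self-initiated visit, then head-fix, image, and release."""
        while True:
            if mouse_at_fixation_point():
                set_fixation(True)
                acquire_frames(session_s)
                set_fixation(False)
            time.sleep(poll_s)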

Murphy, T. H., Boyd, J. D., Bolaños, F., Vanni, M. P., Silasi, G., Haupt, D., & LeDue, J. M. (2016). High-throughput automated home-cage mesoscopic functional imaging of mouse cortex. Nature Communications, 7, 11611.

Nose-Poke System – Kelly Tan Research Group

The Kelly Tan research group at the University of Basel, Switzerland investigates the neural correlates of motor behavior, focusing on the role of the basal ganglia in controlling various aspects of motor actions. To aid in their investigation, the group has developed an open-source nose-poke system utilizing an Arduino microcontroller, several low-cost electronic components, and a PVC behavioral arena. These researchers have shared the following information about the project:

Giorgio Rizzi, Meredith E. Lodge, Kelly R. Tan. MethodsX 3 (2016) 326–332.
Operant behavioral tasks for animals have long been used to probe the function of multiple brain regions. The recent development of tools and techniques has opened the door to refining the answers to these same questions with a much higher degree of specificity and accuracy, in both the biological and spatio-temporal domains. A variety of systems designed to test operant behavior are now commercially available but have prohibitive costs. Here, we provide a low-cost alternative to a nose-poke system for mice. Adapting a freely available sketch for Arduino boards, in combination with an in-house-built PVC box and inexpensive electronic components, we constructed a four-port nose-poke system that detects and counts port entries.
  • We provide a low-cost alternative to commercially available nose-poke systems.
  • Our custom-made apparatus is open source and TTL compatible.
  • We validate our system with optogenetic self-stimulation of dopamine neurons in mice.



The Kelly Tan research group provides further documentation for this device, including SketchUp design files, Arduino source code, and a full bill of materials, as supplementary data in their 2016 paper.
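The detection-and-counting logic at the heart of such a system is small: register an entry on each transition from "beam clear" to "beam broken" at any of the four ports, increment that port's counter, and emit a TTL pulse. The authors implement this as an adapted Arduino sketch; the Python version below only illustrates the same idea, with the beam-reading and TTL functions as placeholders.

    import time

    N_PORTS = 4

    def beam_broken(port):
        # Placeholder for reading one port's IR beam (True while a nose blocks it).
        return False

    def send_ttl(port):
        # Placeholder for the TTL pulse the real system emits on each entry.
        print("TTL pulse, port", port)

    def count_entries(poll_s=0.01):
        """Count port entries; a held nose poke is counted only once."""
        counts = [0] * N_PORTS
        previous = [False] * N_PORTS
        while True:
            for p in range(N_PORTS):
                state = beam_broken(p)
                if state and not previous[p]:            # clear -> broken transition
                    counts[p] += 1
                    send_ttl(p)
                    print(time.time(), "entry at port", p, "total", counts[p])
                previous[p] = state
            time.sleep(poll_s)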

ArduiPod Box

ArduiPod Box is a simple, low-cost touchscreen-based operant conditioning chamber that utilizes an iPod Touch in conjunction with an Arduino microcontroller to present visual and auditory stimuli, record behavior in the form of nose-pokes or screen touches, and deliver liquid reward. In his 2014 paper, Oskar Pineño introduces the ArduiPod Box and demonstrates its use in a visual discrimination task.

ArduiPod Box relies on an open-source iOS app named Shaping, which can be downloaded for free from the iTunes Store as well as from Dr. Pineño’s website. Detailed instructions for assembling the ArduiPod Box are also provided on the website. In addition, a video demonstration of the ArduiPod Box can be found here.



Pineño, O. (2014). ArduiPod Box: A low-cost and open-source Skinner box using an iPod Touch and an Arduino microcontroller. Behavior Research Methods, 46(1), 196–205.