
An Open Source Automated Bar Test for Measuring Catalepsy in Rats

August 6, 2020

Researchers at the University of Guelph have created a low-cost automated apparatus for measuring catalepsy that increases measurement accuracy and reduces observer bias.


Catalepsy is a measure of muscular rigidity that can result from several factors, including Parkinson’s disease or pharmacological exposure to antipsychotics or cannabis. Catalepsy bar tests are widely used to measure this rigidity. The test consists of placing the forepaws of a rodent on a horizontal bar raised off the ground and measuring the time it takes for the subject to remove itself from this imposed posture. Traditionally, this has been timed by an experimenter with a stopwatch or with prohibitively expensive commercial apparatus that have limitations of their own. The automated bar test described here uses a 3D-printed base and an Arduino-based controller to keep the build simple and affordable. The design sets itself apart by using extremely low-cost beam-break sensors, which avoid a pitfall of the traditional “complete the circuit” approach, in which changes in the rat’s grip can produce false measurements. The beam-break sensors determine whether the rat is on the bar, and the device automatically measures the time the rat takes to remove itself from the bar and stores it on an SD card for later retrieval. The device has been validated in rats; however, the bar height is adjustable, so there is no reason it cannot be used with other rodents as well. This bar test thus makes catalepsy measurement easy and accurate while limiting the experimenter bias that comes with manual timing.
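The published device runs its timing logic on an Arduino; as a rough illustration of that logic, here is a minimal sketch in Python that assumes a callable reporting the beam-break state (the `beam_broken` callable and the simulated sensor are stand-ins, not the authors’ firmware):

```python
import time

def run_trial(beam_broken, timeout_s=120.0, poll_s=0.01):
    """Time how long the imposed posture is held on the bar.

    `beam_broken` is a callable returning True while the rat occupies the
    bar (on the real device, a digital read of the IR beam-break sensor).
    Returns the removal latency in seconds, capped at `timeout_s`.
    """
    # Wait until the animal is placed on the bar (beam broken).
    while not beam_broken():
        time.sleep(poll_s)
    start = time.monotonic()
    # Count elapsed time until the rat removes itself, up to the cutoff.
    while beam_broken() and (time.monotonic() - start) < timeout_s:
        time.sleep(poll_s)
    return min(time.monotonic() - start, timeout_s)

if __name__ == "__main__":
    # Simulated sensor: "on the bar" for 3 seconds, starting 0.5 s from now.
    t0 = time.monotonic() + 0.5
    fake_sensor = lambda: t0 < time.monotonic() < t0 + 3.0
    print("Latency: %.2f s" % run_trial(fake_sensor))  # stand-in for SD logging
```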

Learn more about this project in the recent paper!

Or check out the Hackaday project page!


Luciani, K. R., Frie, J. A., & Khokhar, J. Y. (2020). An Open Source Automated Bar Test for Measuring Catalepsy in Rats. eNeuro, 7(3). https://doi.org/10.1523/ENEURO.0488-19.2020

PiDose

July 30, 2020

Cameron Woodard has kindly shared the following write-up about PiDose, an open source system for oral drug administration to group-housed mice.


“PiDose is an open-source tool for scientists performing drug administration experiments with mice. It allows for automated daily oral dosing of mice over long time periods (weeks to months) without the need for experimenter interaction and handling. To accomplish this, a small 3D-printed chamber is mounted adjacent to a regular mouse home-cage, with an opening in the cage to allow animals to freely access the chamber. The chamber is supported by a load cell, and does not contact the cage but sits directly next to the entrance opening. Prior to treatment, mice have a small RFID capsule implanted subcutaneously, and when they enter the chamber they are detected by an RFID reader. While the mouse is in the chamber, readings are taken from the load cell in order to determine the mouse’s bodyweight. At the opposite end of the chamber from the entrance, a nose-poke port accesses a spout which dispenses drops from two separate liquid reservoirs. This spout is wired to a capacitive touch sensor controller in order to detect licks, and delivers liquid drops in response to licking. Each day, an average weight is calculated for each mouse and a drug dosage is determined based on this weight. When a mouse licks at the spout it dispenses either regular drinking water or a drop of drug solution depending on whether they have received their daily dosage or not. All components are controlled by a Python script running on a Raspberry Pi 3B. PiDose is low cost (~$250 for one system) and full build instructions as well as parts for 3D-printing and software can be found online.”
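As a rough sketch of the dosing logic described in the write-up (not the authors’ code; the dose, drug concentration, drop volume, and function names below are hypothetical), the daily drug volume can be derived from the day’s load-cell weight readings, and each lick then dispenses drug solution until that day’s quota is reached:

```python
from statistics import mean

DOSE_MG_PER_KG = 10.0       # hypothetical target dose
DRUG_CONC_MG_PER_ML = 1.0   # hypothetical drug concentration
DROP_VOLUME_ML = 0.01       # hypothetical volume of one dispensed drop

def daily_drug_drops(weight_readings_g):
    """Turn a day's load-cell weight readings into a number of drug drops."""
    weight_kg = mean(weight_readings_g) / 1000.0
    dose_mg = DOSE_MG_PER_KG * weight_kg
    return round(dose_mg / DRUG_CONC_MG_PER_ML / DROP_VOLUME_ML)

def on_lick(mouse, drops_target):
    """Decide what a detected lick dispenses: drug until dosed, then water."""
    if mouse["drug_drops_today"] < drops_target:
        mouse["drug_drops_today"] += 1
        return "drug"
    return "water"

mouse = {"drug_drops_today": 0}
target = daily_drug_drops([24.8, 25.1, 25.3])   # grams, from the load cell
print([on_lick(mouse, target) for _ in range(3)])
```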

Read more about PiDose in their paper!

Check out the Open Science Framework repository for the project.

Or head over to the Hackaday page to learn more!


Woodard, C. L., Nasrallah, W. B., Samiei, B. V., Murphy, T. H., & Raymond, L. A. (2020). PiDose: An open-source system for accurate and automated oral drug administration to group-housed mice. Scientific Reports, 10(1). https://doi.org/10.1038/s41598-020-68477-2

DeepFly3D, a deep learning-based approach for 3D limb and appendage tracking in tethered, adult Drosophila

July 23, 2020

Semih Günel and colleagues have created a deep learning-based pose estimator for studying how neural circuits control limbed behaviors in tethered Drosophila.


Appendage tracking is an important behavioral measure in motor circuit research. Until now, algorithms for accurate 3D pose estimation in small animals such as Drosophila did not exist. Instead, researchers have had to rely on alternative approaches such as placing small reflective markers on fly leg segments. While this method works for larger animals, in Drosophila-sized animals it restricts motion, is labor intensive, and cannot recover 3D information, limiting the accuracy of behavioral measures. DeepFly3D is a PyTorch- and PyQt5-based software package designed to solve these issues and provide a user-friendly interface for pose estimation and appendage tracking. DeepFly3D makes use of supervised deep learning for 2D joint detection and a multicamera setup to iteratively infer 3D poses. This approach allows automated measurements with sub-millimeter accuracy. Notably, DeepFly3D is not limited to Drosophila and can be adapted to study other animals, such as rodents, primates, and humans. DeepFly3D therefore allows for versatile pose estimation while also permitting an extraordinary level of behavioral detail and accuracy.
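The core geometric step, combining 2D joint detections from multiple calibrated cameras into a 3D position, can be illustrated with standard linear (DLT) triangulation. This is a generic NumPy sketch, not DeepFly3D’s own implementation, and it assumes the 3x4 projection matrices come from a prior camera calibration:

```python
import numpy as np

def triangulate_point(proj_mats, points_2d):
    """Linear (DLT) triangulation of one joint from several camera views.

    proj_mats: list of 3x4 camera projection matrices from calibration.
    points_2d: list of (x, y) detections of the same joint in each view,
               e.g. the outputs of a 2D joint-detection network.
    Returns the estimated 3D position as a length-3 array.
    """
    rows = []
    for P, (x, y) in zip(proj_mats, points_2d):
        rows.append(x * P[2] - P[0])
        rows.append(y * P[2] - P[1])
    A = np.stack(rows)
    # The homogeneous 3D point is the right singular vector of A with the
    # smallest singular value.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]

if __name__ == "__main__":
    # Synthetic check with two cameras and a known 3D point.
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
    X_true = np.array([0.2, -0.1, 3.0, 1.0])
    project = lambda P, X: (P @ X)[:2] / (P @ X)[2]
    est = triangulate_point([P1, P2], [project(P1, X_true), project(P2, X_true)])
    print(np.allclose(est, X_true[:3]))  # True
```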

Read more in the paper!

Or check out the project’s GitHub!


Günel, S., Rhodin, H., Morales, D., Campagnolo, J., Ramdya, P., & Fua, P. (2019). DeepFly3D, a deep learning-based approach for 3D limb and appendage tracking in tethered, adult Drosophila. eLife, 8. https://doi.org/10.7554/eLife.48571

BonVision

July 16, 2020

In a recent bioRxiv preprint, Gonçalo Lopes and colleagues from NeuroGEARS and University College London have shared information about BonVision, open source software for creating and controlling visual environments.


With advances in computer gaming and software rendering, it is now possible to create realistic virtual environments. These virtual environments, which can be programmed to react to user input, are useful tools for understanding the neural basis of many behaviors. To expand access to this tool, Lopes and colleagues have developed BonVision, a software package for Bonsai that allows for control of 2D and 3D visual environments. Bonsai is a high-performance, open source, event-based language that is already widely used in the neuroscience community for closed-loop experimental control and is compatible with a flexible range of inputs and outputs. BonVision features a modular workflow in which users specify stimuli in 2D or 3D environments, which can then be adapted to a number of display configurations. To demonstrate the utility of BonVision across species and common experimental paradigms, the team performed experiments in human psychophysics, animal behavior, and animal neurophysiology. Overall, this software provides considerable flexibility for application in a variety of experiments across species.

Read more from the preprint here!

Check out the Bonsai programming language here!

Or take a peek at the GitHub repository for the BonVision project!


Lopes, G., Farrell, K., Horrocks, E. A., Lee, C., Morimoto, M. M., Muzzu, T., . . . Saleem, A. B. (2020). BonVision – an open-source software to create and control visual environments. bioRxiv. https://doi.org/10.1101/2020.03.09.983775

MNE Scan: Software for real-time processing of electrophysiological data

July 9, 2020

In a 2018 Journal of Neuroscience Methods article, Lorenz Esch and colleagues present MNE Scan, software that provides real-time acquisition and processing of electrophysiological data.


MNE Scan is a state-of-the-art real-time processing application for clinical MEG and EEG data. By allowing real-time analysis of neuronal activity, MNE Scan enables the optimization of input stimuli and permits the use of neurofeedback. MNE Scan is based on the open-source MNE-CPP library. Written in C++, MNE-CPP is a software framework that processes standard electrophysiological data formats and is compatible with Windows, Mac, and Linux. Compared with other open-source real-time electrophysiological processing software, MNE Scan is designed to meet medical regulatory requirements such as IEC 62304. This makes MNE Scan well suited for clinical studies, and it is already in active use with an FDA-approved pediatric MEG system. MNE Scan has also been validated in several different use cases, making it a robust solution for processing MEG and EEG data in a variety of scenarios.
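As a very loose illustration of what block-wise, real-time processing involves (this is generic Python/SciPy, not MNE Scan’s C++ API; the sampling rate and chunk size are assumptions), an incoming stream can be filtered chunk by chunk while carrying the filter state between chunks:

```python
import numpy as np
from scipy.signal import butter, sosfilt, sosfilt_zi

FS = 1000.0    # sampling rate in Hz (assumed for illustration)
CHUNK = 100    # samples delivered per acquisition block

# 1-40 Hz bandpass applied block by block; carrying the filter state (zi)
# between blocks gives the same result as a single pass over all the data.
sos = butter(4, [1.0, 40.0], btype="bandpass", fs=FS, output="sos")
zi = sosfilt_zi(sos)

rng = np.random.default_rng(0)
outputs = []
for _ in range(10):                      # stands in for the acquisition loop
    block = rng.standard_normal(CHUNK)   # stands in for streamed EEG samples
    filtered, zi = sosfilt(sos, block, zi=zi)
    outputs.append(filtered)             # hand off to display or neurofeedback

print(np.concatenate(outputs).shape)     # (1000,) processed samples so far
```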

Read more in the paper here!

Or check it out right from their website!


OpenBehavior goes international! New members of the team and new content are on the way

July 2, 2020

Exciting news in an otherwise troubled time. This month, the team behind OpenBehavior is expanding and going international. We would like to welcome Jibran Khokhar and Jude Frie to the team. They are based at the University of Guelph, and their lab has shared several highly imaginative open source projects with the community. Jibran and another recent member of the team, Wambura Fobbs, are planning new content on educational applications of open source tools. Wambura is also leading the development of the new open source video repository. (More on that below.) Jude will be working with Samantha White and Linda Amarante on new content and will bring his background in electrical engineering to the site.

We hope that with this expanded team we will be able to create more content and also get our new video repository live within the next two weeks. Wambura is working with Josh Wilson and Mark Laubach to get the site rolling. We received very positive responses from the community and will have videos from several sites available for you soon. If anyone would like to contribute videos, please reach out to us at openbehavior@gmail.com.

We will also be able to create longer, more in-depth posts on topics that are useful for research, training, and education. We have wanted to post some basic tutorials on the core methods needed for working with open source tools, and now we have the staff to do so.

In the education space, there are many available options, such as Backyard Brains and new platforms, for example littleBits, that use recently developed microprocessors for learning about electronics. We will soon be writing about these tools, and about how open source tools are crucial for global literacy in science, for making research affordable, and for overcoming barriers that exist due to worldwide systemic racism.

A third line of expanded work is on developing RRIDs for open source research tools. OB has a grant pending with the NSF to support this effort, but we would like to get going on it sooner rather than later. RRIDs were created to enable citation of research tools (without requiring publication of a technical report) and also tracking of batches and variations in things like antibodies. They are maintained by the SciCrunch project at UCSD, and Dr. Anita Bandrowski is collaborating with the OB team to support the creation of RRIDs for open source devices and programs used in neuroscience research. We have already created RRIDs for some of the most popular devices that we have posted about on OB. (Check out our paper from the summer of 2019 about those devices: https://www.eneuro.org/content/6/4/ENEURO.0223-19.2019.) We will continue creating RRIDs and will be reaching out to the community to make sure that you know when your device has one. If you would like to discuss this effort with us, please get in touch at openbehavior@gmail.com.

In addition, we will be launching a redesign of our website this summer. It will allow for tagging devices more thoroughly and maintaining a database of available projects on the OB website. Sam White and Marty Isaacson (an undergraduate in the Laubach Lab) are working on the redesign and we will share it with you soon.

Finally, we would like to highlight a new effort by André Maia Chagas and his team called Open Neuroscience. They have created a bot for posting about open source tools. The account is on Twitter and is rocking the content. Check it out: https://twitter.com/openneurosci

CerebraLux

June 25, 2020

This week we want to shed some light on a project from Robel Dagnew and colleagues from UCLA called CerebraLux, a wireless system for optogenetic stimulation.


Optogenetic methods have become a crucial tool for understanding the role that specific neural cell populations play in modulating or maintaining a variety of behaviors. These methods require light to be delivered through a fiber optic probe, and in many experimental setups this is achieved with a long fiber optic cable connecting the light source to the probe. This cable can impose limitations on experiments in which animals behave freely in chambers or mazes. One obvious solution is to deliver light via a wireless controller communicating with a head-mounted light source, but existing systems can be cost-prohibitive or require access to specialized manufacturing equipment to build in the lab. To address the need for a low-cost wireless optogenetic probe, Dagnew and colleagues developed CerebraLux, which is built from off-the-shelf and accessible custom parts. The device consists of two major components: the optical component, a milled baseplate that holds the optic fiber and couples it to the LED (part of the electronic portion), and the electronic component, which features a custom printed circuit board (PCB), a lithium battery, an IR receiver, the LED, and magnets that align and connect the two halves of the device. The device is controlled via a custom GUI (built with the Tkinter library in Python 2.7) that sends pulses to the device via an Arduino Uno. More details on the build and on communicating with the device through the GUI are available in Dagnew et al. The CerebraLux design and operations manual, which includes the 3D design files for the milled parts, the print design for the PCB, and code for communicating with the device, is available in the appendix of the paper, while the code for the GUI is available from the Walwyn Lab website. Be sure to check out the paper for information about how they validated the device in vivo. The cost of all the component parts (as of 2017) comes to just under $200, making CerebraLux a cost-effective solution for labs seeking a wireless optogenetic probe.
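The GUI-to-Arduino control path can be pictured with a small Tkinter window that writes a command byte over a serial link; the port name and single-character protocol below are hypothetical, for illustration only, and not the CerebraLux code available from the Walwyn Lab:

```python
import tkinter as tk
import serial  # pyserial

PORT = "/dev/ttyACM0"   # hypothetical port for the Arduino Uno
BAUD = 9600             # assumed baud rate

def send_pulse_train():
    """Ask the Arduino to emit a stimulation pulse train.

    The single 'P' command byte is a made-up protocol for illustration;
    the real system forwards the pulse parameters set in the GUI.
    """
    with serial.Serial(PORT, BAUD, timeout=1) as ser:
        ser.write(b"P")

root = tk.Tk()
root.title("Wireless optogenetics (sketch)")
tk.Button(root, text="Start pulse train", command=send_pulse_train).pack(
    padx=20, pady=20)
root.mainloop()
```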

Read more about CerebraLux here!


Dagnew, R., Lin, Y., Agatep, J., Cheng, M., Jann, A., Quach, V., . . . Walwyn, W. (2017). CerebraLux: A low-cost, open-source, wireless probe for optogenetic stimulation. Neurophotonics, 4(4). https://doi.org/10.1117/1.nph.4.4.045001

Rodent Arena Tracker (RAT)

June 18, 2020

Jonathan Krynitsky and colleagues from the Kravitz lab at Washington University have constructed and shared RAT, a closed loop system for machine vision rodent tracking and task control.


The Rodent Arena Tracker, or RAT, is a low-cost wireless position tracker for automatically tracking mice in high-contrast arenas. The device can use subject position information to control other devices in real time, allowing for closed-loop control of tasks based on positional behavior data. The device is based on the OpenMV Cam M7 (openmv.io), an open source machine vision camera equipped with onboard processing for real-time analysis, which reduces data storage requirements and removes the need for an external computer. The authors optimized the control code for tracking mice and created a custom circuit board that runs the device off a battery and includes a real-time clock for synchronization, a BNC input/output port, and a push button for starting the device. Build instructions for RAT, along with validation data highlighting its effectiveness and potential uses, are available in their recent publication. Further, all of the design files, including the PCB design, 3D printer files, and Python code, are available on hackaday.io.
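On the camera itself, the tracking loop amounts to grabbing grayscale frames and locating the dark blob (the mouse) against the light arena. The MicroPython sketch below shows that general idea with assumed thresholds; the published RAT firmware adds logging, the real-time clock, and the BNC closed-loop output:

```python
# Generic MicroPython sketch for an OpenMV camera; thresholds are assumptions,
# and the published RAT firmware adds SD logging, the BNC output, RTC, etc.
import sensor, time

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.QVGA)        # 320x240
sensor.skip_frames(time=2000)            # let auto-exposure settle

clock = time.clock()
while True:
    clock.tick()
    img = sensor.snapshot()
    # A dark mouse on a light arena floor: search for low-intensity blobs.
    blobs = img.find_blobs([(0, 60)], pixels_threshold=200,
                           area_threshold=200, merge=True)
    if blobs:
        mouse = max(blobs, key=lambda b: b.pixels())
        x, y = mouse.cx(), mouse.cy()    # centroid in pixels
        print(clock.fps(), x, y)
        # Closed-loop hook: e.g. drive an output pin when (x, y) enters a
        # region of interest defined for the task.
```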

Read the full article here!

Or check out the project on hackaday.io!


Krynitsky, J., Legaria, A. A., Pai, J. J., Garmendia-Cedillos, M., Salem, G., Pohida, T., & Kravitz, A. V. (2020). Rodent Arena Tracker (RAT): A Machine Vision Rodent Tracking Camera and Closed Loop Control System. eNeuro, 7(3). https://doi.org/10.1523/ENEURO.0485-19.2020

 

Video Repository Initiative on OpenBehavior

June 11, 2020

Last fall, when teaching an undergraduate course on computational methods in neuroscience at American University, we wanted to bring in some of the tools for video analysis that have been promoted on OpenBehavior. The idea was to introduce these tools at the end of the course, after the students had learned a bit about Python, Anaconda, Jupyter, Arduinos, etc. We decided on ezTrack from the Cai Lab, as it is written in Python and uses Jupyter notebooks. It was easy to prepare for this topic until we realized that we needed simple videos for tracking. Those from our lab come from operant chambers illuminated with infrared LEDs and require a good bit of preprocessing to be suited for analysis with simple tracking algorithms. In addition, we use Long-Evans rats in our studies, and they are a challenge to track given their coloring. So we looked around the web for example videos and were surprised by how rarely labs that have developed, published with, and promoted tools for video analysis share example videos. Most videos that we found showed the results of tracking and did not provide raw video data. We did find a nice example of open-field behavior in mice (Samson et al., 2015) and used the supplemental videos from this now five-year-old paper for the course.
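For teaching, the kind of simple tracking such example videos support can be written in a few lines of Python with OpenCV: threshold a dark animal against a light background and take the centroid of the largest contour. This is a generic sketch (not ezTrack’s code), and the intensity threshold and example filename are assumptions:

```python
import cv2

def track_centroids(video_path, dark_animal_max=100):
    """Yield (frame_index, x, y) for the largest dark region in each frame.

    Meant for high-contrast overhead videos; `dark_animal_max` is an
    assumed intensity threshold to adjust per video.
    """
    cap = cv2.VideoCapture(video_path)
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        _, mask = cv2.threshold(gray, dark_animal_max, 255,
                                cv2.THRESH_BINARY_INV)
        # OpenCV 4 return signature: (contours, hierarchy).
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        if contours:
            m = cv2.moments(max(contours, key=cv2.contourArea))
            if m["m00"] > 0:
                yield idx, m["m10"] / m["m00"], m["m01"] / m["m00"]
        idx += 1
    cap.release()

# Example (hypothetical filename): coords = list(track_centroids("openfield.mp4"))
```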

These experiences made us wonder if having a collection of videos for teaching and training would be useful to the community. A collection of video recordings of animals engaged in standard neuroscience behavioral tasks (e.g., feeding, foraging, fear conditioning, operant learning) would be useful for educational purposes: students could read published papers to understand the experimental design and then analyze data from the published studies using modifications of available tutorial code for packages such as ezTrack. For researchers, these same videos would be useful for reproducing analyses from published studies and for quickly learning how to use published code to analyze their own data. Furthermore, with the development of tools that use advanced statistical methods for video analysis (e.g., DeepLabCut, B-SOiD), it seems warranted to have a repository available that could be used to benchmark algorithms and explore their parameter space. One could even envision an analysis competition using standard benchmark videos, similar to the competitions in machine learning that have driven the development of algorithms far more powerful than those available only a decade ago (e.g., XGBoost).

So we are posting today to ask for community participation in the creation of a video repository. The plan is to post license-free videos to the OpenBehavior Google Drive account. Our OpenBehavior team will convert the files to a standard (mp4) format and post links to the videos on the OpenBehavior website, so they will be accessible to the community. The website will list the creator of the video file, the camera and software used for the recording, the resolution, frame rate and duration of recording, the species and information on the behavioral experiment (and a link to the publication or preprint if the work is from a manuscript).

For studies in rodents, we are especially interested in videos showing overhead views from open-field and operant arena experiments and close-up videos of facial reactions, eyeblinks, oral movements and limb reaching. We are happy to curate videos from other species (fish, birds, monkeys, people) as well.

If you are interested in participating, please complete the form on this page or reach out to us via email at openbehavior@gmail.com or Twitter at @OpenBehavior.

 

SpikeForest

May 21, 2020

Hot off the eLife press, Jeremy Magland and colleagues have shared SpikeForest, a tool for validating automated neural spike sorters.


Spike sorting is a crucial step in neural data analysis. Manual spike sorting is time consuming and sensitive to human error, so much effort has gone into developing automated algorithms to perform this necessary step. However, even with the rapid development and sharing of these tools, there is little information to guide researchers toward the algorithm that best serves their needs and delivers the accuracy required for their data. To address this, Magland and colleagues across 11 research groups have developed SpikeForest and contributed data to it. This Python-based software suite combines a large database of electrophysiology recordings containing ground-truth units (units whose spike times are known a priori), a parallel processing pipeline to benchmark algorithm performance, and a web interface for users to explore the results. The tool can be used to assess which algorithm works best for data from different recording and experimental methods (in vivo, ex vivo, tetrode, etc.) and provides accurate evaluation metrics for comparison. Information about the spike sorting algorithms that SpikeForest can compare is available in the recent publication, along with a preliminary comparison of these algorithms based on community-provided datasets. The SpikeForest interface also allows users to sort their own data with a few modifications to the code, which is discussed in the publication. Be sure to check it out!
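Conceptually, the comparison boils down to matching each sorter’s spike times against the ground-truth times and scoring the overlap; a simplified version of such an accuracy computation (generic, not the SpikeForest implementation) might look like this:

```python
import numpy as np

def match_accuracy(gt_times, sorted_times, tol=0.001):
    """Greedy matching of sorted spikes to ground-truth spikes.

    Times are in seconds; a sorted spike within `tol` of an unmatched
    ground-truth spike counts as a true positive. Returns accuracy
    = TP / (TP + FN + FP), one common summary of sorter performance.
    """
    gt = np.sort(np.asarray(gt_times))
    st = np.sort(np.asarray(sorted_times))
    i = j = tp = 0
    while i < len(gt) and j < len(st):
        if abs(gt[i] - st[j]) <= tol:
            tp += 1
            i += 1
            j += 1
        elif st[j] < gt[i]:
            j += 1   # unmatched sorted spike (false positive)
        else:
            i += 1   # unmatched ground-truth spike (miss)
    fn = len(gt) - tp
    fp = len(st) - tp
    return tp / (tp + fn + fp)

# Example: a sorter that found most, but not all, of the true spikes.
truth = [0.010, 0.055, 0.120, 0.300]
found = [0.0102, 0.0551, 0.310]
print("accuracy = %.2f" % match_accuracy(truth, found))
```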

Read about SpikeForest here!

Or explore the SpikeForest web interface here!