Home » Most Recent

Category: Most Recent

Open-Source Neuroscience Hardware Hack Chat

February 13, 2020

This week we would like to highlight an event hosted by Hackaday.io: The Open-Source Neuroscience Hardware Hack Chat. Lex Kravitz and Mark Laubach will be available on Wednesday, February 19, 2020 at noon Pacific Time to chat with users of Hackaday.io about open-source tools for neuroscience research.

In case you don’t know, Hackaday.io is a really awesome project hosting site. It hosts many open-source projects that can teach you about microcontrollers, 3D printing, and other makerspace tools, and it is an easy place to find new project ideas and helpful build instructions.

We have previously posted about several popular projects that are hosted on Hackaday.io, such as SignalBuddy, PhotometryBox, and FED. (By the way, FED is now offered through a collaboration between the OpenBehavior and Open Ephys projects: https://open-ephys.org/fed3/fed3.) But there are a number of other interesting projects hosted on Hackaday.io that are worth a look.

For example, FORCE (Force Output of Rodent Calibrated Effort), developed by Bridget Matikainen-Ankney, measures the force that behaving rodents exert and can be used in studies in which response force controls reward delivery.

Another interesting project is the LabRATory Telepresence Robot developed by Brett Smith. It is a robotic system that allows for motion correction in imaging studies done in behaving mice using trackball setups.

Two other cool projects on Hackaday.io provide tools for studying behavior in electric fish, an electric fish detector by Michael Haag and the electric fish piano by Davis Catolico. The electric fish piano can be used to listen to, record, and manipulate the electrical tones made by these kinds of fish.

Finally, there are a couple of projects that could be useful for research and teaching labs, including a project by Dieu My Nguyen on measuring jumping behavior in grasshoppers and a rig by Nancy Sloan for recording central pattern generators in snails.

Check out these projects and let us know what you think! We hope to chat with you next Wednesday.



LINK: https://hackaday.io/event/169511-open-source-neuroscience-hardware-hack-chat

Camera Control

February 6, 2020

The Adaptive Motor Control Lab at Harvard recently posted their project, Camera Control, a Python-based GUI for camera control, to GitHub.


Camera Control is an open-source software package written by postdoctoral fellow Gary Kane that allows video to be recorded in sync with behavior. The Python GUI and scripts allow investigators to record from multiple Imaging Source camera feeds with an associated timestamp for each frame. When used in combination with a NIDAQ card, timestamps from a behavioral task can also be recorded on the falling edge of a TTL signal, so video analysis can be paired with physiological recordings when assessing behavioral results. The package requires Windows 10, Anaconda, and Git, is compatible with Imaging Source USB3 cameras, and can be downloaded from the lab’s GitHub, where instructions for installation and video recording are provided.
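The core idea, saving a system timestamp for every recorded frame so video can later be aligned with task or physiology events, can be sketched generically with OpenCV. This is an illustrative sketch only, not the camera_control package’s actual API; the camera index, frame size, and file names are placeholders.

```python
# Generic frame-by-frame timestamping sketch (not camera_control's API).
import csv
import time

import cv2

cap = cv2.VideoCapture(0)                               # placeholder camera index
fourcc = cv2.VideoWriter_fourcc(*"MJPG")
writer = cv2.VideoWriter("session.avi", fourcc, 30.0, (640, 480))

with open("frame_timestamps.csv", "w", newline="") as f:
    log = csv.writer(f)
    log.writerow(["frame", "timestamp_s"])
    for frame_idx in range(3000):                       # roughly 100 s at 30 fps
        ok, frame = cap.read()
        if not ok:
            break
        frame = cv2.resize(frame, (640, 480))           # match the writer's frame size
        writer.write(frame)
        log.writerow([frame_idx, time.time()])          # system timestamp for this frame

cap.release()
writer.release()
```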

Find more on GitHub.


Kane, G. & Mathis, M. (2019). Camera Control: record video and system timestamps from Imaging Source USB3 cameras. GitHub. https://zenodo.org/badge/latestdoi/200101590

Rigbox: an open source toolbox for probing neurons and behavior

January 30, 2020

In a recent preprint, Jai Bhagat, Miles J. Wells and colleagues shared a toolbox, developed by Christopher Burgess, for streamlining behavioral neuroscience experiments.


In behavioral neuroscience, it’s important to keep track of both behavioral and neural data, and to do so in a way that simplifies later analysis. One of the best ways to achieve this is a centralized system that runs the behavioral and neural recording software while streaming all of the data. To address this, Burgess and team developed Rigbox, a high-performance, open-source software toolbox that facilitates a modular approach to designing experiments. Rigbox runs in MATLAB (with some Java and C for network communication and processing speed), and its main submodule, Signals, allows intuitive programming of behavioral tasks. While it was originally developed for analyzing mouse behavior in a steering-wheel-driven task, the authors show its feasibility for human behavioral tasks (psychophysics and a Pong game), highlighting the broad array of ways this toolbox can be used in neuroscience.
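Signals implements a reactive style of task programming: task variables are declared as signals, and values that depend on them update automatically whenever their inputs change. Below is a rough Python analogue of that idea for illustration only; Rigbox itself is written in MATLAB, and none of the names here come from its API.

```python
# Toy reactive-signal sketch (conceptual analogue of Rigbox's Signals, not its API).
class Signal:
    def __init__(self, value=None):
        self._value = value
        self._listeners = []

    def map(self, fn):
        """Return a new signal whose value is fn applied to this signal's value."""
        derived = Signal(None if self._value is None else fn(self._value))
        self._listeners.append(lambda v: derived.set(fn(v)))
        return derived

    def on_value(self, callback):
        self._listeners.append(callback)

    def set(self, value):
        self._value = value
        for listener in self._listeners:
            listener(value)

# Hypothetical example: deliver reward when the wheel moves past a threshold.
wheel_position = Signal(0.0)
response_made = wheel_position.map(lambda pos: abs(pos) > 30)
response_made.on_value(lambda made: print("deliver reward") if made else None)

wheel_position.set(42.0)   # propagates through the network and prints "deliver reward"
```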
For more, check out the full preprint!
Or jump right in on GitHub.


SimBA

January 23, 2020

Simon Nilsson from Sam Golden’s lab at the University of Washington recently shared their project SimBA (Simple Behavioral Analysis), an open source pipeline for the analysis of complex social behaviors:


“The manual scoring of rodent social behaviors is time-consuming and subjective, impractical for large datasets, and can be incredibly repetitive and boring. If you spend significant time manually annotating videos of social or solitary behaviors, SimBA is an open-source GUI that can automate the scoring for you. SimBA does not require any specialized equipment or computational expertise.

SimBA uses data from popular open-source tracking tools in combination with a small amount of behavioral annotations to create supervised machine learning classifiers that can then rapidly and accurately score behaviors across different background settings and lighting conditions. Although SimBA is developed and validated for complex social behaviors such as aggression and mating, it has the flexibility to generate classifiers in different environments and for different behavioral modalities. SimBA takes users through a step-by-step process and we provide detailed installation instructions and tutorials for different use case scenarios online. SimBA has a range of in-built tools for video pre-processing, accessing third-party tracking models, and evaluating the performance of machine learning classifiers. There are also several methods for in-depth visualizations of behavioral patterns. Because of constraints in animal tracking tools, the initial release of SimBA is limited to processing social interactions of differently coat colored animals, recorded from a top down view, and future releases will advance past these limitations. SimBA is very much in active development and a manuscript is in preparation. Meanwhile, we are very keen to hear from users about potential new features that would advance SimBA and help in making automated behavioral scoring accessible to more researchers in behavioral neuroscience.”
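The workflow described above, in which frame-by-frame pose features and a small set of human annotations train a supervised classifier that then scores whole videos, can be sketched with scikit-learn. This is a minimal illustration with hypothetical feature and label files, not SimBA’s actual code.

```python
# Minimal supervised behavior-classification sketch (illustration, not SimBA code).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical inputs: one row per video frame, with columns such as inter-animal
# distance, movement speed, and body-part angles derived from pose tracking.
features = np.load("pose_features.npy")          # shape: (n_frames, n_features)
labels = np.load("behavior_annotations.npy")     # 1 = behavior present (human-scored subset)

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.2, random_state=0
)

clf = RandomForestClassifier(n_estimators=500, n_jobs=-1, random_state=0)
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))

# Score every frame of a new, unannotated session from its pose features.
new_features = np.load("new_session_features.npy")
behavior_probability = clf.predict_proba(new_features)[:, 1]   # per-frame probability
```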


For more information on SimBA, you can check out the project’s GitHub page here.

For those looking to contribute, or to try out SimBA and provide feedback, you can interact on the project’s Gitter page.

Plus, take a look at their recent Twitter thread detailing the project.

If you would like to be added to the project’s listserv for updates, fill out this form.

 

B-SOiD

January 16, 2020

Eric Yttri from Carnegie Mellon University has shared the following about B-SOiD, an open source unsupervised algorithm for discovery of spontaneous behaviors:


“Capturing the performance of naturalistic behaviors remains a tantalizing but prohibitively difficult field of study – current methods are difficult, expensive, low temporal resolution, or all of the above. Recent machine learning applications have enabled localization of limb position; however, position alone does not yield behavior. To provide a high temporal resolution bridge from positions to actions and their kinematics, we developed Behavioral Segmentation of Open-field In DeepLabCut, or B-SOiD. B-SOiD is an unsupervised learning algorithm that discovers and classifies actions based on the inherent statistics of the data points provided (from any marker or markerless system, not just DeepLabCut). Our algorithm enables the automated segregation of different, sub-second behaviors with a single bottom-up perspective video camera – and does so without considerable effort or potential bias from the user. This open-source platform opens the door to the efficient study of spontaneous behavior and its neural mechanisms. It also readily provides critical behavioral metrics that historically have been difficult to quantify, such as grooming and stride-length in OCD and stroke research.”
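The general approach, embedding frame-by-frame pose features into a low-dimensional space and clustering them so that each cluster becomes a candidate behavior, might be sketched as follows. This is an illustrative outline only, not B-SOiD’s exact pipeline; the file name and cluster count are hypothetical.

```python
# Unsupervised behavior-discovery sketch (illustration, not B-SOiD's exact pipeline).
import numpy as np
from sklearn.manifold import TSNE
from sklearn.mixture import GaussianMixture

# Hypothetical pose-derived features per frame: speeds, inter-limb distances, angles.
features = np.load("pose_features.npy")                  # shape: (n_frames, n_features)

# Embed into two dimensions, then let a mixture model find clusters; each cluster
# is a candidate behavior (e.g., grooming, rearing, locomotion) to inspect and name.
embedding = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(features)
gmm = GaussianMixture(n_components=10, random_state=0).fit(embedding)
behavior_labels = gmm.predict(embedding)                 # one behavior label per frame
```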


Code available: https://github.com/YttriLab/B-SOID
Preprint available: https://www.biorxiv.org/content/10.1101/770271v1.full


RatHat: A self-targeting printable brain implant system

January 9, 2020

Leila Allen and colleagues in Tim Allen’s lab at Florida International University recently developed RatHat, a self-targeting printable brain implant system. Below they describe their project:


“There has not been a major change in how neuroscientists approach stereotaxic methods in decades. Here we present a new stereotaxic method that improves on traditional approaches by reducing costs, training, surgical time, and aiding repeatability. The RatHat brain implantation system is a 3D printable stereotaxic device for rats that is fabricated prior to surgery and fits to the shape of the skull. RatHat builds are directly implanted into the brain without the need for head-leveling or coordinate-mapping during surgery. The RatHat system can be used in conjunction with the traditional u-frame stereotaxic device, but does not require the use of a micromanipulator for successful implantations. Each RatHat system contains several primary components including the implant for mounting intracranial components, the surgical stencil for targeting drill sites, and the protective cap for impacts and debris. Each component serves a unique function and can be used together or separately. We demonstrate the feasibility of the RatHat system in four different proof-of-principle experiments: 1) a 3-pole cannula apparatus, 2) an optrode-electrode assembly, 3) a fixed-electrode array, and 4) a tetrode hyperdrive. Implants were successful, durable, and long-lasting (up to 9 months). RatHat print files are easily created, can be modified in CAD software for a variety of applications, and are easily shared, contributing to open science goals and replications. The RatHat system has been adapted to multiple experimental paradigms in our lab and should be a useful new way to conduct stereotaxic implant surgeries in rodents.

RatHat is freely available to academic researchers, achieving open science goals. Academic and non-profit researchers interested in receiving the 3D files can contact Dr. Timothy Allen (tallen@fiu.edu). We will first provide you a simple noncommercial license to be executed by your institution, and upon completion, printable and editable 3D files of the implant system. Our responses are fast, and all files are provided immediately after receiving the aforementioned document. Our goals are noncommercial, and our only interests are to share RatHat as widely as possible in support of our open science goals and to improve the pace of discovery using chronic brain implant systems for behavioral studies.”

 


For more details, you can check out the preprint here.

 

Autopilot

December 12, 2019

Jonny Saunders from Michael Wehr’s lab at the University of Oregon recently posted a preprint documenting their project Autopilot, a Python framework for running behavioral experiments:


Autopilot is a Python framework for running behavioral experiments on Raspberry Pi single-board computers. It incorporates every aspect of an experiment, including the hardware, stimuli, behavioral task paradigm, data management, data visualization, and a user interface. The authors propose that Autopilot is the fastest, least expensive, and most flexible behavioral system currently available.

The benefit of using Autopilot is its experimental flexibility, which lets researchers optimize it for their specific needs. The project also exemplifies how useful a Raspberry Pi can be for running experiments and recording data; the preprint discusses many benefits of Raspberry Pis, including their speed, precision, proper data logging, and low cost (only $35!). Ultimately, the authors developed Autopilot to encourage users to write reusable, portable experiments that can be shared through a public central library, promoting replication and reproducibility.
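To give a flavor of the kind of Raspberry Pi task control that Autopilot wraps in a reusable framework, here is a minimal sketch using the standard RPi.GPIO library. The pin numbers and wiring are hypothetical, and this is not Autopilot’s own API, which additionally handles networking, data management, and the GUI.

```python
# Bare-bones nosepoke-for-reward loop on a Raspberry Pi (not Autopilot's API).
import csv
import time

import RPi.GPIO as GPIO

NOSEPOKE_PIN = 17    # hypothetical input pin for an IR beam-break sensor
REWARD_PIN = 27      # hypothetical output pin driving a solenoid valve

GPIO.setmode(GPIO.BCM)
GPIO.setup(NOSEPOKE_PIN, GPIO.IN, pull_up_down=GPIO.PUD_UP)
GPIO.setup(REWARD_PIN, GPIO.OUT, initial=GPIO.LOW)

with open("trials.csv", "w", newline="") as f:
    log = csv.writer(f)
    log.writerow(["trial", "poke_time_s"])
    try:
        for trial in range(100):
            GPIO.wait_for_edge(NOSEPOKE_PIN, GPIO.FALLING)   # block until a poke
            log.writerow([trial, time.time()])
            GPIO.output(REWARD_PIN, GPIO.HIGH)               # open the reward valve
            time.sleep(0.1)
            GPIO.output(REWARD_PIN, GPIO.LOW)
    finally:
        GPIO.cleanup()
```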

 

For more information, check out their presentation or the Autopilot website here.

Additionally, documentation is here, along with a GitHub repo, and a link to their preprint is here.


Oat: online animal tracker

December 5, 2019

Jonathan Newman of the Wilson Lab at Massachusetts Institute of Technology has developed and shared a set of programs for processing video.


The only thing you’ll enjoy more than an oat milk latte from your favorite coffeeshop is Oat, a collection of video processing components designed for use with Linux! Developed by Jonathan Newman of Open Ephys, this set of programs is useful for processing video, extracting object position information, and streaming data. While it was designed for real-time animal position tracking, it can be used in any application that requires real-time object tracking. The individual Oat components each have a standard interface and can be chained together to create complex dataflow networks for capturing, processing, and recording video streams.
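As a conceptual illustration of that chained-component design (Oat itself is a set of compiled command-line programs linked through shared memory, so none of this is Oat’s real interface), a frame source, a position detector, and a recorder can be composed like a small pipeline:

```python
# Toy dataflow chain in the spirit of Oat's components (not Oat's actual interface).
import cv2

def frame_source(path):
    """Yield frames from a video file."""
    cap = cv2.VideoCapture(path)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        yield frame
    cap.release()

def detect_position(frames, threshold=200):
    """Yield the centroid of a bright target in each frame."""
    for frame in frames:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        _, mask = cv2.threshold(gray, threshold, 255, cv2.THRESH_BINARY)
        m = cv2.moments(mask)
        if m["m00"] > 0:
            yield m["m10"] / m["m00"], m["m01"] / m["m00"]

def record(positions, path="positions.csv"):
    """Write each (x, y) position to a CSV file."""
    with open(path, "w") as f:
        for x, y in positions:
            f.write(f"{x},{y}\n")

# Chain the components, analogous to piping Oat programs into a dataflow network.
record(detect_position(frame_source("session.avi")))
```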

Read more about Oat on the Open Ephys website, or check out more on GitHub!


https://open-ephys.org/oat

Touchscreen Cognition and MouseBytes

November 21, 2019

Tim Bussey and Lisa Saksida from Western University and the BrainsCAN group developed touchscreen chambers that can be used to measure rodent behavior. While the touchscreens themselves are not an open-source device, we appreciate the group’s open-science push to build a user community, run workshops and tutorials, and share data. Most notably, their sister project, MouseBytes, is an open-access database for cognitive data collected in touchscreen-based tasks:


Touchscreen History:

In an effort to develop a cognitive testing method for rodents that would closely parallel touchscreen testing in humans, Bussey et al. (1994, 1997a,b) developed a touchscreen apparatus for rats, which was subsequently adapted for mice as well. In short, the touchscreens allow computer-generated stimuli to be presented to a rodent, which makes choices in a task based on the stimuli that appear. The group published a “tutorial” paper detailing the behavior and the training methods needed to get rats to perform optimally on these devices (Bussey et al., 2008). Additionally, in 2013, the group published three separate Nature Protocols articles detailing how to use the touchscreens in tasks assessing executive function, learning and memory, and working memory and pattern separation in rodents (Horner et al., 2013; Mar et al., 2013; Oomen et al., 2013).

Most recently, the group developed https://touchscreencognition.org/, a place for user forums, discussion, training information, and more. The group also runs live training sessions for anyone interested in using touchscreens in their tasks, and their Twitter account, @TouchScreenCog, highlights recent trainings. Because the tests for specific behaviors are automated, data can be compared across labs and tasks.


MouseBytes:

Additionally, MouseBytes is an open-access database to which scientists can upload their data and in which they can analyze data already collected by other groups. Not only does this reduce redundant experiments, it also promotes transparency and reproducibility for the community. The site performs data comparison and interactive data visualization for any uploaded dataset, and also provides guidelines and video tutorials.


Nature Protocols Tutorials:

Horner, A. E., Heath, C. J., Hvoslef-Eide, M., Kent, B. A., Kim, C. H., Nilsson, S. R., … & Bussey, T. J. (2013). The touchscreen operant platform for testing learning and memory in rats and mice. Nature Protocols, 8(10), 1961.

Mar, A. C., Horner, A. E., Nilsson, S. R., Alsiö, J., Kent, B. A., Kim, C. H., … & Bussey, T. J. (2013). The touchscreen operant platform for assessing executive function in rats and mice. Nature Protocols, 8(10), 1985.

Oomen, C. A., Hvoslef-Eide, M., Heath, C. J., Mar, A. C., Horner, A. E., Bussey, T. J., & Saksida, L. M. (2013). The touchscreen operant platform for testing working memory and pattern separation in rats and mice. Nature Protocols, 8(10), 2006.

Original Touchscreen Articles:

Bussey, T. J., Muir, J. L., & Robbins, T. W. (1994). A novel automated touchscreen procedure for assessing learning in the rat using computer graphic stimuli. Neuroscience Research Communications, 15(2), 103-110.

Bussey, T. J., Padain, T. L., Skillings, E. A., Winters, B. D., Morton, A. J., & Saksida, L. M. (2008). The touchscreen cognitive testing method for rodents: how to get the best out of your rat. Learning & Memory, 15(7), 516-523.

 

You can buy the touchscreens here.

 

Editor’s Note: We understand that Nature Protocols is not an open-access journal and that the touchscreens must be purchased from a commercial company and are not technically open-source. However, we appreciate the group’s ongoing effort to streamline data across labs, to put on training workshops, and to provide an open-access data repository for this type of data.

The OpenMV project: Machine Vision with Python

November 14, 2019

OpenMV – Better, Stronger, Faster and only $65 USD


Recent updates to the firmware for the OpenMV H7 camera have brought new functionality to this device, which is popular for open-source neuroscience projects (e.g., the Rodent Arena Tracker, or RAT: https://hackaday.io/project/162481-rodent-arena-tracker-rat). The new firmware allows use of the popular TensorFlow Lite library for machine learning on this MicroPython-based device. The camera is small (1.5 by 1.75 inches), consumes at most 140 mA when processing data, has 1 MB of RAM and 2 MB of flash, and its 480 MHz processor moves data over a 64-bit bus (about 3.84 GB/s of bandwidth). OpenMV is capable of frame differencing, color tracking, marker tracking, face detection, eye tracking, person detection (with TensorFlow Lite), and more. The project supports an easy-to-use GUI, the OpenMV IDE, which is intuitive and offers a number of ready-to-go example applications. Arduino users will feel right at home, despite the code being Python-based.
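For a sense of how simple the MicroPython workflow is, here is a minimal color-tracking loop written in the style of the examples bundled with the OpenMV IDE. The LAB color thresholds are placeholders to tune for your own arena and animal.

```python
# Minimal OpenMV color-tracking loop (thresholds are placeholders to tune).
import sensor
import time

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)
sensor.skip_frames(time=2000)       # let the camera settle
clock = time.clock()

# LAB thresholds for a dark target against a lighter background (tune for your setup).
thresholds = [(0, 40, -20, 20, -20, 20)]

while True:
    clock.tick()
    img = sensor.snapshot()
    for blob in img.find_blobs(thresholds, pixels_threshold=200, area_threshold=200, merge=True):
        img.draw_rectangle(blob.rect())
        img.draw_cross(blob.cx(), blob.cy())
    print(clock.fps())
```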

Check out the project here: https://openmv.io/.