
Autopilot

DECEMBER 12, 2019

Jonny Saunders from Michael Wehr’s lab at the University of Oregon recently posted a preprint documenting their project Autopilot, a Python framework for running behavioral experiments:


Autopilot is a Python framework for behavioral experiments built around Raspberry Pi single-board computers. Autopilot incorporates all aspects of an experiment, including the hardware, stimuli, behavioral task paradigm, data management, data visualization, and a user interface. The authors propose that Autopilot is the fastest, least expensive, and most flexible behavioral system currently available.

The benefit of using Autopilot is its experimental flexibility, which lets researchers optimize it for their specific experimental needs. Additionally, this project exemplifies how useful a Raspberry Pi can be for performing experiments and recording data. The preprint discusses many benefits of Raspberry Pis, including their speed, precision, proper data logging, and low cost (about $35!). Ultimately, the authors developed Autopilot to encourage users to write reusable, portable experiments that can be deposited in a public central library to promote replication and reproducibility.
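To give a sense of what a trial-based task might look like in code, here is a minimal, purely illustrative Python sketch of a two-alternative choice trial loop. The class and method names are our own placeholders and do not reflect Autopilot’s actual API; see the Autopilot documentation for how tasks are really defined.

```python
# Illustrative sketch only -- the class and method names are hypothetical and do
# not reflect Autopilot's actual Task API; consult the Autopilot docs for the real one.
import random


class TwoAlternativeChoiceTask:
    """A toy two-alternative choice task, parameterized like a typical trial-based paradigm."""

    def __init__(self, reward_ms=50, stimulus_set=("tone_low", "tone_high")):
        self.reward_ms = reward_ms          # solenoid open time per correct response
        self.stimulus_set = stimulus_set    # stimuli drawn at random each trial
        self.trial = 0

    def run_trial(self, get_response):
        """Present a stimulus, collect a response, and return a trial-data dict."""
        self.trial += 1
        stimulus = random.choice(self.stimulus_set)
        response = get_response(stimulus)   # e.g. a poke on the left or right port
        correct = (stimulus == "tone_low") == (response == "left")
        return {"trial": self.trial, "stimulus": stimulus,
                "response": response, "correct": correct}


if __name__ == "__main__":
    task = TwoAlternativeChoiceTask()
    # Simulate five trials with random responses standing in for real hardware input.
    for _ in range(5):
        print(task.run_trial(lambda s: random.choice(["left", "right"])))
```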

 

For more information, check out their presentation or the Autopilot website here.

Additionally, documentation is here, along with a GitHub repo, and a link to their preprint is here.


3D Printed Headcap and Microdrive

SEPTEMBER 26, 2019

In their 2015 Journal of Neurophysiology article, the Paré Lab at the Center for Molecular and Behavioral Neuroscience at Rutgers University describe their novel head-cap and microdrive design for chronic multi-electrode recordings in rats through the use of 3D printing technology and highlight the impact of 3D printing technology on neurophysiology:


There is a need for microdrives and head-caps that can accommodate different recording configurations. Many investigators implant multiple individual drives to record from numerous areas. However, this extends surgery time, impairs animal recovery, and complicates experiments. Other strategies rely on more expensive custom-machined drive assemblies built for a particular set of regions, limiting their adaptability. Some proposed designs allow targeting of multiple regions, but the recording sites must lie within a few millimeters of each other, so they are only suitable for mice and not for accessing areas of larger brains (such as in rats).

Using 3D printing technology, the group developed a novel microdrive and head-cap design that allows recording from multiple brain regions in different configurations. In their article, the lab reviews the basic principles of 3D design and printing, introduces their approach to multisite recording, and explains how to construct the individual required components. The 3D printed head-cap and electrode microdrive enable investigators to perform chronic multi-site recordings in rats. The head-cap is composed of five components, and there are three types of microdrives that can be used in different combinations or positions to study different targets. The microdrive designs offer different functionality, including extended driving depths, targeting of thin layers, and dense placement of many microdrives in a small area.

To demonstrate the viability of their new designs, the lab presents LFP recordings obtained throughout the cortico-hippocampal loop using the 3D printed components. The lab suggests investigators modify the designs to best suit their research needs and provides editable versions of the three parts most relevant for modification. The investigators also give a detailed explanation of the printing, assembly, and implantation of the head-caps and microdrives. Finally, they point to ways in which advances in 3D printing, notably 3D scanning and new material development, may change how chronic implants are designed and used.

For more information on the microdrive and headcap, see their paper’s Appendix, which has full instructions and advice on building these devices.


Headley, D. B., DeLucca, M. V., Haufler, D., & Paré, D. (2015). Incorporating 3D-printing technology in the design of head-caps and electrode drives for recording neurons in multiple brain regions. Journal of Neurophysiology, 113(7), 2721–2732. https://doi.org/10.1152/jn.00955.2014

SignalBuddy

SEPTEMBER 19, 2019

Richard Warren, a graduate student in the Sawtell lab at Columbia University, recently shared his new open-source project called SignalBuddy:


SignalBuddy is an easy-to-make, easy-to-use signal generator for scientific applications. Making friends is hard, but making SignalBuddy is easy. All you need is an Arduino Uno! SignalBuddy replaces more complicated and (much) more expensive signal generators in laboratory settings where one millisecond resolution is sufficient. SignalBuddy generates digital or true analog signals (sine waves, step functions, and pulse trains), can be controlled with an intuitive serial monitor interface, and looks fabulous in an optional 3D printed enclosure.

To get SignalBuddy working, all you need to do is install the SignalBuddy.ino Arduino code provided on their GitHub and follow the step-by-step instructions there to program the Arduino for your specific experimental needs. SignalBuddy can be used for numerous lab purposes, including creating pulse trains for optogenetic light stimulation, microstimulation, and electrophysiology, or for generating stimuli for behavioral paradigms.
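Because SignalBuddy is controlled through a serial interface, commanding it from a computer could look something like the sketch below using pyserial. The port name and command strings here are placeholders, not SignalBuddy’s documented command syntax; check the GitHub README for the real commands.

```python
# Hypothetical example of driving an Arduino-based signal generator over serial.
# The port and command strings are placeholders; consult the SignalBuddy README
# for its actual serial command syntax.
import time

import serial  # pyserial

PORT = "/dev/ttyACM0"   # or "COM3" on Windows -- adjust for your setup

with serial.Serial(PORT, baudrate=9600, timeout=1) as ard:
    time.sleep(2)                            # give the Arduino time to reset after the port opens
    ard.write(b"pulse 20 100\n")             # e.g. request 20 ms pulses every 100 ms (placeholder syntax)
    print(ard.readline().decode().strip())   # print whatever confirmation the sketch sends back
```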

Additionally, their Hackaday site provides the instructions for 3D printing an enclosure to house the Arduino using just two .stl files.


For more information, check out the SignalBuddy github repository here.

You can also get further details on the SignalBuddy Hackaday.io page here.

 

Fun Fact: This group also developed the KineMouse Wheel, a project previously posted on OpenBehavior that is now being used in numerous labs! Cheers to another great open-source project from Richard Warren and the Sawtell lab!

eNeuro’s “Open Source Tools and Methods” paper topic

SEPTEMBER 12, 2019

There’s a new place to publish your open-source tools or methods in neuroscience! Christophe Bernard, Editor-in-Chief at the journal eNeuro (an open-access journal of the Society for Neuroscience), recently wrote an editorial detailing the opening of a new article track in eNeuro for Open Source Tools and Methods. In his editorial, Bernard describes the recent push for open-source science and highlights how many new open-source projects being developed in neuroscience need a proper home for publication. While eNeuro already has a “Methods/New Tools” submission type, Bernard says the “Open Source Tools and Methods” submission is available for projects like “low-cost devices to measure animal behavior, a new biophysical model of a single neuron, a better method to realign images when performing in vivo two-photon imaging, scripts and codes to analyze signals” and more.

There is currently no publication venue explicitly intended for open-source tools and methods in neuroscience; with this article type, new tools, methods, devices, and projects can be published in a straightforward manner, aiding the field in replication, reproducibility, and transparency of the methods and tools used. A major point from Bernard is that this may help the developers of the tool or method, since “it allows for acknowledgment of those who developed such tools and methods fully, often rotating students or engineers recruited on a short-duration contract. On a standard research paper, their name ends up in the middle of the list of authors, but the Open Source Tools and Methods type will allow them to be the first author.”

The details for submitting an open source tool or method on the eNeuro site are as follows: “Open Source Tools and Methods are brief reports (limited to 4500 words) describing the creation and use of open-source tools in neuroscience research. Examples of tools include hardware designs used in behavioral or physiological studies and software used for data acquisition and analysis. They must contain a critique of the importance of the tool, how it compares to existing open- and closed-source solutions, and a demonstration of tool use in a neuroscience experiment.”

 

Cheers to you, eNeuro, for your inclusion of open-source projects to help advance the neuroscience field!


Link to the editorial: https://www.eneuro.org/content/6/5/ENEURO.0342-19.2019

Current articles for Open Source Tools and Methods are listed here.

To submit an article under Open Source Tools and Methods, check out the instructions for authors at eNeuro here.

Curated Itinerary on Open-Source Tools at SfN-19

September 5, 2019

OpenBehavior is now an official part of the SfN team for curated itineraries at SfN-19! This year, we will provide an itinerary on Open-Source Tools. Linda Amarante (@L_Amarante) and Samantha White (@samantha6rose) are working on the itinerary now. If you would like your presentation to be included, please DM us through our Twitter account (@OpenBehavior) or send an email message about your presentation to openbehavior@gmail.com before noon on Saturday, September 8. Thanks!

Ratcave

AUGUST 29, 2019

Nicholas A. Del Grosso and Anton Sirota at the Bernstein Centre for Computational Neuroscience recently published their new project called Ratcave, a Python 3D graphics library that allows researchers to create and present 3D stimuli in their experiments:


Neuroscience experiments often require software to present stimuli to a subject and record the subject’s responses. Many current psychophysics libraries lack the 3D graphics support needed for such experiments. While Python and other programming languages have 3D graphics libraries, these are hard to integrate into psychophysics libraries without modification. To make 3D graphics programming fit into the existing ecosystem of Python psychophysics software, the authors developed Ratcave.

Ratcave is an open-source, cross-platform Python library that adds 3D stimulus support to all OpenGL-based 2D Python stimulus libraries, including VisionEgg, PsychoPy, Pyglet, and Pygame. Ratcave comes with resources including basic 3D object primitives and a wide range of 3D lighting effects. Its intuitive object-oriented interface allows all objects (meshes, lights, and cameras) to be repositioned, rotated, and scaled, and objects can be parented to one another to specify complex relationships between them. By sending the data as a single array using OpenGL’s VAO (Vertex Array Object) functionality, drawing is made much more efficient; this approach allows over 30,000 vertices to be rendered at a performance level surpassing the needs of most behavioral research studies.
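As a rough illustration of that object-oriented style, the sketch below follows the pattern shown in Ratcave’s documentation for loading a built-in primitive mesh and drawing it in a Pyglet window; exact attribute and resource names may differ between Ratcave versions, so treat this as a sketch rather than copy-paste code.

```python
# Minimal sketch of drawing a rotating 3D primitive with ratcave + pyglet,
# following the pattern in the ratcave tutorials (names may vary by version).
import pyglet
import ratcave as rc

window = pyglet.window.Window()

# Load one of the 3D primitives that ships with ratcave and place it in front of the camera.
reader = rc.WavefrontReader(rc.resources.obj_primitives)
monkey = reader.get_mesh("Monkey")
monkey.position.xyz = 0, 0, -3

scene = rc.Scene(meshes=[monkey])


@window.event
def on_draw():
    with rc.default_shader:   # bind ratcave's built-in shader before drawing
        scene.draw()


def spin(dt):
    monkey.rotation.y += 60 * dt   # rotate the mesh 60 degrees per second


pyglet.clock.schedule(spin)
pyglet.app.run()
```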

An advantage of Ratcave is that it allows researchers to continue to use their preferred libraries: because Ratcave supplements existing Python stimulus libraries, it is easy to add 3D stimuli to current experiments. The manuscript also reports that Ratcave has been tested and implemented in others’ research, demonstrating reproducibility across labs and experiments.

Details on the hardware and software can be found at https://github.com/ratcave/ratcave.

Information on Ratcave can also be found at https://ratcave.readthedocs.org.


The Future is Open

August 16, 2019

This week’s post is about the current state of OpenBehavior (OB) and ongoing efforts within the open source neuroscience community. Next week, we will resume posting about new tools.

Samantha White, Linda Amarante, Lex Kravitz, and Mark Laubach published a commentary in eNeuro last week about how open-source tools are being used in neuroscience. We reported on our experiences in running OB since the summer of 2016, the many wonderful projects that we have posted about over the past three years, two surveys that we conducted on our site and open source tool use in general, and some observations on the mindset that comes from making and using open source tools. A link to our paper is https://www.eneuro.org/content/6/4/ENEURO.0223-19.2019.

The timing of our commentary and the related social media attention that it generated (e.g. https://twitter.com/samantha6rose/status/1159913815393341440) was especially nice, as we have been working to expand OB to better serve the research community and hope to find external support for the project. We would like to address an outstanding problem: it is not currently possible to systematically track the development and use of open source hardware and software in neuroscience research. To address this issue, we would like to create a database of existing open source projects, characterize them using a newly developed “taxonomy” based on their functions (video analysis, behavioral control system, hardware for measuring or controlling behavior), and register projects using the SciCrunch RRID registry.

If you haven’t heard of SciCrunch, you should check it out: https://scicrunch.org/. It’s an awesome project that tracks usage of research tools such as antibodies. RRIDs are citable and, if created for open source hardware and software, would allow developers to track how their tools are used in neuroscience publications. This might help provide incentives for sharing and metrics (RRIDs) on tool use and publication.
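As a loose illustration of what one entry in such a database might look like, here is a small Python sketch; the field names and taxonomy labels are our own assumptions, not an existing OpenBehavior or SciCrunch schema.

```python
# Hypothetical record structure for cataloging an open-source tool; the fields
# and the empty RRID placeholder are illustrative assumptions only.
from dataclasses import dataclass, field


@dataclass
class OpenSourceTool:
    name: str
    category: str                 # e.g. "video analysis", "behavioral control system", "hardware"
    repository: str               # link to the project's code or design files
    rrid: str = ""                # Research Resource Identifier, once registered
    publications: list = field(default_factory=list)


tool = OpenSourceTool(
    name="AutonoMouse",
    category="behavioral control system",
    repository="https://github.com/RoboDoig/autonomouse-control",
)
print(tool)
```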

We are also planning to work with the Society for Neuroscience (SfN) to increase public awareness of neuroscience research by participating in SfN-sponsored advocacy and outreach events, facilitating discussions of open source tools through a new discussion topic in the Neuronline forums (more news on that soon), and continuing to provide curated itineraries on open source tools for attendees of the annual SfN meeting.

 

3D Printed Headstage Implant

June 6, 2019

Richard Pinnell from Ulrich Hofmann’s lab has three publications centered on open-source and 3D printed methods for protecting headstage implants and on portable, waterproof DBS and EEG recording paired with water maze behavior. We share details on the three studies below:


Most researchers opt to single-house rodents after surgery. This helps the wound heal and prevents damage to the implant. However, there are substantial benefits to social housing, as social isolation can be stressful for rodents. As a way to continue to socially house rats, Pinnell et al. (2016a) created a novel 3D-printed headstage socket that surrounds the electrode connector. Rats could be successfully pair-housed with these implants and their protective caps.

The polyamide headcap socket itself is 3D printed, and a stainless-steel thimble screws into it; unscrewing the thimble exposes the electrode connector. The implant not only improves the well-being of the rodent post-surgery, but also prevents damage to the electrode implant during experiments and keeps it clean.

The 3D printed headcap was used in a second study (Pinnell et al., 2016b) for wireless EEG recording in rats during a water maze task. The headstage socket housed the PCB electrode connector, to which the waterproof wireless system was attached. During normal housing conditions, this waterproof attachment was replaced with a standard 18×9 mm stainless-steel sewing thimble with 1.2 mm holes drilled at either end for attachment to the headstage socket. A PCB connector was manufactured to fit inside the socket and contains an 18-pin ZIF connector, two DIP connectors, and an 18-pin Omnetics electrode connector providing an interface between the implanted electrodes and the wireless recording system.

Finally, the implant was used in a third study (Pinnell et al., 2018) in which the same group created a miniaturized, programmable deep-brain stimulator for use in a water maze. A portable deep brain stimulation (DBS) device was built around a PCB design and paired with the 3D printed headcap, which was modified from Pinnell et al. (2016a) to completely cover the implant and protect the PCB. The device, its battery, and housing together weigh 2.7 g, offer protection from both the environment and other rats, and can be used in DBS studies during water maze behavior.

The portable stimulator, the 3D printed cap .stl files, and other files from the publications can be found at https://figshare.com/s/31122e0263c47fa5dabd.


Pinnell, R. C., Almajidy, R. K., & Hofmann, U. G. (2016a). Versatile 3D-printed headstage implant for group housing of rodents. Journal of neuroscience methods, 257, 134-138.

Pinnell, R. C., Almajidy, R. K., Kirch, R. D., Cassel, J. C., & Hofmann, U. G. (2016b). A wireless EEG recording method for rat use inside the water maze. PloS one, 11(2), e0147730.

AutonoMouse

May 10, 2019

In a recently published article (Erskine et al., 2019), the Schaefer lab at the Francis Crick Institute introduced their new open-source project called AutonoMouse.


AutonoMouse is a fully automated, high-throughput system for self-initiated conditioning and behavior tracking in mice. Many aspects of behavior can be analyzed by having rodents perform operant conditioning tasks. However, in operant experiments many variables can alter or confound results: experimenter presence, picking up and handling animals, altered physiological states from water restriction, and the fact that rodents often need to be individually housed to track their individual performance. This was the main motivation for the authors to completely automate operant conditioning.

The authors developed AutonoMouse as a fully automated system that can track large numbers (over 25) of socially housed mice via implanted RFID chips. With the RFID trackers and other analyses, the behavior of mice can be tracked as they train and are subsequently tested on (or self-initiate testing in) an odor discrimination task over months, with thousands of trials performed every day. The novelty of this study lies in the fully automated nature of the entire system (training, experiments, water delivery, and weighing of the animals are all automated) and in the ability to keep mice socially housed 24/7 while still training them and tracking their performance in an olfactory operant conditioning task. The modular set-up makes it possible to use AutonoMouse to study other sensory modalities, such as vision, or decision-making tasks. The authors provide a components list, layouts, construction drawings, and step-by-step instructions for the construction and use of AutonoMouse in their publication and on the project’s GitHub.
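The core idea behind RFID-gated, self-initiated training can be sketched in a few lines of Python: whichever mouse shows up at the behavior port identifies itself by its implanted chip, and its trial outcome is logged under its own ID so group-housed animals can share one apparatus. The read_rfid() and run_odor_trial() helpers below are hypothetical stand-ins for the real hardware interfaces, not AutonoMouse code.

```python
# Conceptual sketch of RFID-gated, self-initiated trial logging; the helper
# functions are placeholders for real RFID-reader and task hardware.
import csv
import datetime
import random


def read_rfid():
    """Placeholder for a hardware RFID reader; returns the tag ID of the mouse at the port."""
    return random.choice(["mouse_001", "mouse_002", "mouse_003"])


def run_odor_trial(mouse_id):
    """Placeholder for one odor-discrimination trial; returns whether the response was correct."""
    return random.random() < 0.75


with open("trials.csv", "a", newline="") as f:
    writer = csv.writer(f)
    for _ in range(10):                       # in a real system this loop would run continuously
        mouse = read_rfid()                   # whichever mouse enters the port starts a trial
        correct = run_odor_trial(mouse)
        writer.writerow([datetime.datetime.now().isoformat(), mouse, correct])
```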


For more details, check out this YouTube interview with Andreas Schaefer, PI on the project.

 

The GitHub repository for the project’s control software is located here: https://github.com/RoboDoig/autonomouse-control, the project’s design and hardware instructions are here: https://github.com/RoboDoig/autonomouse-design, and the schedule generation program is located here: https://github.com/RoboDoig/schedule-generator.


Craniobot

March 13, 2019

Suhasa Kodandaramaiah from the University of Minnesota, Twin Cities, has shared the following about the Craniobot, a computer numerically controlled robot for cranial microsurgeries.


The palette of tools available to neuroscientists for measuring and manipulating the brain during behavioral experiments has expanded greatly over the past decade. In many cases, using these tools requires removing sections of the skull to access the brain. Removing the sub-millimeter-thick mouse skull precisely without damaging the underlying brain is technically challenging and takes significant skill and practice, presenting a potential obstacle for neuroscience labs wishing to adopt these technologies. To overcome this challenge, a team at the University of Minnesota led by Mathew Rynes and Leila Ghanbari (equal contribution) created the ‘Craniobot,’ a cranial microsurgery platform that combines automated skull surface profiling with a computer numerical controlled (CNC) milling machine to perform a variety of cranial microsurgical procedures on mice. The Craniobot can be built from off-the-shelf components for a little over $1000, and the team has demonstrated its capability to perform small to large craniotomies, skull-thinning procedures, and drilling of pilot holes for installing bone anchor screws.
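To illustrate the basic idea of combining surface profiling with CNC milling, here is a simplified Python sketch that turns probed skull heights into a circular G-code toolpath held at a fixed cutting depth below the surface. It is an illustration of the concept only, not the Craniobot’s actual control software, and all coordinates and depths are made-up values.

```python
# Conceptual sketch: convert probed skull-surface heights around a planned
# craniotomy into a circular G-code toolpath that follows the surface.
# Coordinates, depths, and feed rate are illustrative assumptions.
import math

center_x, center_y = 2.0, -2.0    # craniotomy center in mm (assumed stereotaxic coordinates)
radius = 1.5                      # craniotomy radius in mm
cut_depth = 0.10                  # cutting depth below the probed surface, in mm

# Surface heights (mm) probed at each angle around the circle; in practice these
# would come from the machine's automated touch-off probing of the skull.
surface_z = {a: 0.05 * math.sin(math.radians(a)) for a in range(0, 360, 10)}

gcode = ["G21 ; millimeters", "G90 ; absolute positioning"]
for angle in sorted(surface_z):
    x = center_x + radius * math.cos(math.radians(angle))
    y = center_y + radius * math.sin(math.radians(angle))
    z = surface_z[angle] - cut_depth
    gcode.append(f"G1 X{x:.3f} Y{y:.3f} Z{z:.3f} F60")

print("\n".join(gcode))
```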

Read more about the Craniobot here. The software package for controlling the Craniobot can be found on GitHub.