
Category: Behavioral Apparatus

Open Source Joystick

March 26, 2020

This week we want to talk about joy! I mean, joy-sticks. Parley Belsey, Mark Nicholas and Eric Yttri have developed and shared an open-source joystick for studying motor behavior and decision making in mice!


Mice are hopping and popping in research, and researchers are getting ever more creative in order to understand the finer aspects of their behaviors. Recently, members of the Yttri lab at Carnegie Mellon used their skills to create an open-source joystick for studying mouse motor and decision-making behaviors! In their paper they describe the full behavioral setup (based on the RIVETS design from the Dudman lab), featuring a removable head-fixation point, a sipping tube, and a joystick that measures reach trajectory, amplitude, speed, and more. Data are collected and devices are controlled via an Arduino, a solenoid circuit, a microSD card reader, and an LCD readout, and data can be analyzed in real time or saved to a CSV file for later analysis. The Arduino can be programmed to signal reward delivery when a correct response is recorded from the joystick, which streamlines outcome-based reward delivery. Belsey et al. tested the device with adult mice; the paper reports the training results along with full build instructions and ideas for how the tool could be built and used in your lab.
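As a concrete illustration of working with the saved sessions, here is a minimal Python sketch of computing reach amplitude and peak speed from a logged CSV (this is not the authors' analysis code; the file name and the time_ms, x, and y columns are assumed placeholders for whatever the Arduino actually writes):

```python
import numpy as np
import pandas as pd

# Load one session of joystick samples; column names are assumed placeholders.
df = pd.read_csv("joystick_session.csv")     # columns: time_ms, x, y

t = df["time_ms"].to_numpy() / 1000.0        # sample times in seconds
xy = df[["x", "y"]].to_numpy()               # joystick position samples

# Reach amplitude: maximum radial displacement from the starting position.
displacement = np.linalg.norm(xy - xy[0], axis=1)
amplitude = displacement.max()

# Reach speed: magnitude of the sample-to-sample velocity.
velocity = np.gradient(xy, t, axis=0)
speed = np.linalg.norm(velocity, axis=1)

print(f"amplitude: {amplitude:.2f} joystick units, peak speed: {speed.max():.2f} units/s")
```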

For more, check out their publication or GitHub!


Belsey, P., Nicholas, M. A., & Yttri, E. A. (2020). Open-source joystick manipulandum for decision-making, reaching, and motor control studies in mice. eNeuro. doi: 10.1523/ENEURO.0523-19.2020

Robotic Flower System for Bee Behavior

March 19, 2020

Erno Kuusela and Juho Lämsä, from the University of Oulu in Finland, have shared their design for an open-source, computer-controlled robotic flower system for studying bumble bee behavior.


Oh, to be a bumble bee… collecting nectar from a robotic flower… of open-source design… splendid. As with behavioral studies in species more common to neuroscience (rodents, Drosophila, zebrafish, humans, etc.), data collection for behavioral studies in bees can be time-consuming and sensitive to human error. Thanks to the growth of the open-source movement, it's easier than ever to develop hardware and software to automate such studies, which is exactly what Kuusela and Lämsä demonstrate in their publication. They developed a system of robotic flowers to study bee behavior. Their design features a control unit, based on an Arduino Mega 2560, which can collect data from and send commands to up to 32 individual robotic flowers. Each flower contains its own servo-controlled refill system. The nectar cup (in this design, a Phillips screw head that holds 1.7 µL!) is attached to the servomotor's shaft via a servo horn which, when prompted by the program, dips the cup into the flower's individual nectar reservoir. Each flower detects feeding via an IR beam that is broken when an animal engages the feeding mechanism, and these events are sent to the control unit. A covering on the system can be marked with symbols to attract bees. The custom control software is released under an open-source license and can be used as is or modified to fit an experimenter's needs. While developed and tested with bumble bees, the system can also be adapted for other species.
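To make the per-flower logic concrete, here is a rough Python sketch of one polling cycle (the real system is Arduino code on the control unit; the sensor and servo functions below are placeholders for those hardware calls, and the dip duration is an assumed value):

```python
import time

def ir_beam_broken(flower_id):
    """Placeholder for reading the flower's IR beam-break detector."""
    return False  # replace with a real sensor read

def set_servo_angle(flower_id, angle):
    """Placeholder for commanding the flower's refill servo."""
    print(f"flower {flower_id}: servo -> {angle} deg")

def poll_flower(flower_id, visit_log):
    """One pass over a single robotic flower: log the visit, then refill the cup."""
    if not ir_beam_broken(flower_id):
        return
    visit_log.append((flower_id, time.time()))   # report the visit to the control unit
    while ir_beam_broken(flower_id):             # wait until the bee leaves the flower
        time.sleep(0.01)
    set_servo_angle(flower_id, 90)               # dip the nectar cup into the reservoir
    time.sleep(0.5)                              # assumed dwell time for the cup to refill
    set_servo_angle(flower_id, 0)                # return the cup to the feeding position
```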

Read more about the specifics of this system in Kuusela & Lämsä (2016). The circuit diagrams, parts list, and control software source code are available in the paper's supplementary information.


ToneBox: a system for automated auditory operant conditioning

March 12, 2020

Nikolas Francis and colleagues from the Kanold Laboratory at the University of Maryland – College Park have developed and shared ToneBox, a system for high-throughput operant conditioning in a mouse homecage.


Data collection for operant conditioning studies can be time-consuming and sensitive to variability in experimental conditions. An automated approach would both reduce experimental variability and allow for high-throughput training and data collection. To solve this, Francis et al. developed and shared a user-friendly automated system for training up to hundreds of mice in an auditory behavioral task. The system features a custom MATLAB GUI running on a desktop computer that connects to a Raspberry Pi, which presents auditory stimuli through a USB sound card and records licking at a water spout using commercially available capacitive sensors. The parts list and build instructions are readily available in the paper and the associated GitHub repository, and they are straightforward enough for new users to build and operate the system without extensive coding or design experience. The system can collect data 24/7 for weeks on end, allowing insight into behaviors not otherwise observable by experimenters who need to eat and sleep instead of watching mice at all hours.
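For a sense of how simple the downstream analysis can be, here is a minimal Python sketch of scoring hits from logged tone and lick times (the ToneBox pipeline itself is written in MATLAB; the event times and response window below are made-up placeholders):

```python
import numpy as np

# Hypothetical event logs, in seconds from session start.
tone_onsets = np.array([12.0, 47.5, 83.2, 120.9])
lick_times = np.array([12.6, 13.1, 48.0, 122.4, 200.3])
RESPONSE_WINDOW_S = 3.0   # assumed response window

# A trial counts as a hit if at least one lick falls inside the response window.
hits = [
    np.any((lick_times > t) & (lick_times <= t + RESPONSE_WINDOW_S))
    for t in tone_onsets
]
print(f"hit rate: {np.mean(hits):.2f}")
```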

The developers demonstrate their tool by training 24 C57BL/6J mice to perform auditory behavioral tasks. They report that circadian rhythms modulated overall behavioral activity, as expected for nocturnal animals. Details about this study, as well as about building the device and where to find resources to create your very own ToneBox, can be gleaned from their full paper in eNeuro.

Read the paper here, check out the GitHub repository here, or keep up to date with the latest from the Kanold Lab by visiting their site!


FED Development Interview

March 5, 2020

One of the co-founders of OpenBehavior, Lex Kravitz, recently participated in an interview about the development of the Feeding Experimentation Device, lovingly known as FED! Read on to learn about how the project started, how the device has improved, and what's next for the project, including a partnership with the OpenEphys team. Perhaps this will inspire and guide you through your own project development!


1. Tell us about FED: What is it intended to do? What was its initial purpose for use in your lab?
FED is a home-cage compatible device for training mice! FED stands for the Feeding Experimentation Device; the name is also a pun: it was invented while my lab was at the NIH, so the FED was made by the Feds 🙂
The initial purpose of FED was to measure feeding and quantify daily calories eaten by mice living in home cages. Our lab studies feeding and obesity, and the automated methods that exist for measuring food intake (typically small weighing scales that attach to the cage) were too expensive for our lab to purchase enough of them for our studies, which often involved >50 mice at a time. We figured other people might have the same need, so we decided to invent something to do this. We settled on a pellet-dispensing approach because we can buy pellets in precision sizes (20 mg each), which means that instead of weighing the food we can calculate the calories from the number of pellets eaten. From an engineering standpoint, counting pellets seemed easier than scales too. I was very fortunate to have a postbac, Katrina Nguyen, in my lab with a background in biomedical engineering, and she spearheaded the development of our first device, which we called FED. After she left the lab, another postbac, Mohamed Ali, worked with me to create the FED2, a slightly refined version of FED, and FED2 eventually evolved into the FED3 model that we're working with today.

 

2. Can you briefly describe the components of the device, what it is made of, how long it takes to make one?

FED is a “smart” pellet dispenser. At its core it's a 3D-printed pellet dispenser with a microcontroller board inside, the Adafruit M0 Adalogger, which is extremely capable. In addition to dispensing pellets, FED3 has two “nosepokes” for mice to interact with, 8 multicolored LEDs, an audio generator, a screen for user feedback, and a programmable output for synchronizing FED3 behavior with other techniques like optogenetics or fiber photometry. FED3 is small enough to fit inside rodent home cages and does not require a connected computer to operate. FED3 is designed to simplify rodent training: it ships with 12 built-in programs, including fixed-ratio and progressive-ratio operant training routines, as well as optogenetic self-stimulation programs. Most importantly, FED3 is open-source and can be hacked and re-programmed by users to achieve new functionality. It takes me about 1-2 hours to put one together from scratch; it would probably take a new user 3-4 hours. The electronics parts cost about $135, and the Open Ephys Production Site is selling assembled electronics for a small markup. With the electronics and printed parts in hand, it takes about 15 minutes to assemble.

3. Since initial development of the device, what general improvements have been made?
Since Katrina Nguyen made the first FED, we have worked a lot to improve the reliability of the device. We envisioned this being an “always on” device that can sit in a cage and measure food intake around the clock. It holds about 10 days' worth of pellets and the battery lasts about a week, so in theory this is possible. The major challenge, however, is that it is a mechanical pellet dispenser that is prone to issues such as jamming or dispensing errors. In practice we clean and test the FEDs each day when we run multi-day studies.
One issue with the first FED was that we had no way to get feedback from the device, so if it jammed the only way we'd even know was if we opened the cage and tried to get a pellet. For this reason we put a screen on FED2, so we could see how many pellets were dispensed and also get an error message when it jammed. For FED3 we added more functions, such as the two “nosepokes” the mouse uses to interact with FED, to run full home-cage operant tasks. Finally, we added a synchronized BNC output connector, which lets us sync the nosepoke and pellet data with ephys or fiber photometry. Several groups we know are doing this to synchronize fiber photometry recordings with pellet retrieval, or to generate pulses for optogenetic self-stimulation.

4. Switching gears from the actual device to production and replication, what have you seen as necessary steps to getting other labs / researchers successfully using your device (documentation, forums, contact email, etc)?
In terms of my own work to get FED3 into other people's hands, I think the most important thing was to get other people to start using it. So I've tried to keep our online documentation for FED3 complete and up to date. I also give a lot of them out in exchange for feedback; basically, it's important for other people to try building and using a device to discover flaws that you might not realize are there because you've worked on it for too long. I didn't wait until the devices were perfect before I started giving them away, so I could find out what worked and what didn't for new users.

5. Since updating and producing FED3, you’ve decided to connect with OpenEphys production team to sell the device on their website. Why did you decide to now sell the device and have OpenEphys distribute it? What are the advantages (to you as the developer, to OpenEphys, and to the potential users) of deciding to put the device on OpenEphys for production?
It is very challenging to distribute a hardware device as a researcher – there aren't good revenue streams for funding development of devices like this, and there aren't good logistics for distributing them through a university setting. It also takes a lot of time to communicate with people, mail out devices, and provide support. Really, all of this is better suited to a company, and for several reasons I don't want to start one.
Therefore, I jumped at the chance to work with the Open Ephys Production Site to take on these distribution challenges! You can see their FED3 sales page here. Working with them has been really fun and productive – I mailed them a FED3, and they first did a thorough evaluation of the electronics and found several things they could improve in my design to make it more robust and better suited for manufacture. So we worked together to implement and test those improvements. The main advantage for me is that working with them amplifies the effort our team put into this device. I would love to see cheap and easy operant training available in every lab. I also support their broader mission of distributing open-source tools in neuroscience. I'll take this moment to make a brief (no) conflict of interest statement: I'm not receiving any funding from their sales of FED3; the proceeds all go to support their efforts to distribute FED3 and other open-source hardware.

6. Describe the connection between you as a developer and OpenEphys production. Can / should others try to “pitch” their device to OpenEphys (or to other companies out there for production)?
Working with OEPS was very collaborative. We started by evaluating whether the FED3 was something they wanted to sell, and they quickly determined it was. Filipe Carvalho then went through the design files and made some recommendations for things he wanted to change to improve manufacturability. For example, our version relied heavily on Adafruit breakout boards, whereas he engineered the relevant chips onto the FED3 printed circuit board assembly. This makes FED3 easier to manufacture and more reliable. We worked together on all changes, and it was a really productive collaboration. I would definitely suggest others “pitch” their ideas to OEPS! Even if it doesn't work out for OEPS to distribute the idea, I'm sure they'll get good feedback and learn something about manufacturing and distributing open-source hardware.

7. What general advice or instructions do you have for other researchers looking to develop their devices in a similar way? Please let us know of anything you'd like to share with the open-source neuroscience world.
I would advise people to share their designs early and often. Don't wait until the design is perfect (it never will be)! The easier you make it for others to use your device, the better the chance they'll see the value in it that made you create it in the first place. Try to document online as you go, e.g., by using github.com or hackaday.io to keep a record of the changes you make and the validation experiments you perform. It can be difficult to go back and create documentation after the fact, but if you do it as you go you have a built-in record of what you did and why you made the design decisions you made. If you keep up documentation along the way, it can also turn into something you can publish as a methods paper when it's ready. So in summary – share early and share often, and document your work!

Camera Control

February 6, 2020

The Adaptive Motor Control Lab at Harvard recently posted their project, Camera Control, a Python-based camera software GUI, to GitHub.


Camera Control is an open-source software package written by postdoctoral fellow Gary Kane that allows video to be recorded in sync with behavior. The Python GUI and scripts allow investigators to record from multiple Imaging Source camera feeds with associated timestamps for each frame. When used in combination with a NIDAQ card, timestamps from a behavioral task can also be recorded on the falling edge of a TTL signal. This allows video analysis to be paired with physiological recordings, which can be helpful in interpreting behavioral results. The package requires Windows 10, Anaconda, and Git, and is compatible with Imaging Source USB3 cameras. The software is available for download from the lab's GitHub, where instructions for installation and video recording are provided.
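For readers who just want the general idea of per-frame timestamping, here is a generic Python sketch using OpenCV (this is not the Camera Control package, which talks to Imaging Source cameras through its own GUI; the camera index, frame count, and file names are arbitrary placeholders):

```python
import csv
import time

import cv2

cap = cv2.VideoCapture(0)   # first available camera
writer = cv2.VideoWriter(
    "behavior.avi", cv2.VideoWriter_fourcc(*"MJPG"), 30.0, (640, 480)
)

with open("frame_timestamps.csv", "w", newline="") as f:
    log = csv.writer(f)
    log.writerow(["frame", "system_time_s"])
    for frame_idx in range(300):                   # ~10 s at 30 fps
        ok, frame = cap.read()
        if not ok:
            break
        log.writerow([frame_idx, time.time()])     # timestamp paired with this frame
        writer.write(cv2.resize(frame, (640, 480)))

cap.release()
writer.release()
```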

Find more on GitHub.


Kane, G. & Mathis, M. (2019). Camera Control: record video and system timestamps from Imaging Source USB3 cameras. GitHub. https://zenodo.org/badge/latestdoi/200101590

Autopilot

December 12, 2019

Jonny Saunders from Michael Wehr's lab at the University of Oregon recently posted a preprint documenting their project Autopilot, a Python framework for running behavioral experiments:


Autopilot is a Python framework for running behavioral experiments on Raspberry Pi single-board computers. It incorporates all aspects of an experiment, including the hardware, stimuli, behavioral task paradigm, data management, data visualization, and a user interface. The authors propose that Autopilot is the fastest, least expensive, and most flexible behavioral system currently available.

A benefit of using Autopilot is its experimental flexibility, which lets researchers optimize it for their specific experimental needs. Additionally, this project exemplifies how useful a Raspberry Pi can be for running experiments and recording data. The preprint discusses many benefits of Raspberry Pis, including their speed, precision, and proper data logging, and they only cost $35 (!!). Ultimately, the authors developed Autopilot to encourage users to write reusable, portable experiments that can be shared in a public central library to promote replication and reproducibility.
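As a flavor of how little code a Raspberry Pi needs to run task hardware, here is a generic Python sketch using the gpiozero library (this is not Autopilot's API; the GPIO pin numbers and valve timing are arbitrary placeholders):

```python
from signal import pause

from gpiozero import Button, DigitalOutputDevice

nosepoke = Button(17)                    # nosepoke IR beam or lever wired to GPIO 17
reward_valve = DigitalOutputDevice(27)   # solenoid driver wired to GPIO 27

def deliver_reward():
    # Open the solenoid briefly, fixed-ratio-1 style.
    reward_valve.blink(on_time=0.05, n=1)

nosepoke.when_pressed = deliver_reward   # every poke triggers a reward
pause()                                  # keep the script alive to handle events
```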

 

For more information, check out their presentation or the Autopilot website here.

Additionally, documentation is here, along with a GitHub repo, and a link to their preprint is here.


Touchscreen Cognition and MouseBytes

November 21, 2019

Tim Bussey and Lisa Saksida from Western University and the BrainsCAN group developed touchscreen chambers that can be used to measure rodent behavior. While the touchscreens themselves are not an open-source device, we appreciate the group's open-science push to create a user community, run workshops and tutorials, and share data. Most notably, their sister project, MouseBytes, is an open-access database for cognitive data collected in touchscreen-based tasks:


Touchscreen History:

In an effort to develop a cognitive testing method for rodents that would closely mirror touchscreen testing in humans, Bussey et al. (1994, 1997a,b) developed a touchscreen apparatus for rats, which was subsequently adapted for mice as well. In short, the touchscreens allow computer-generated graphics to be presented to a rodent, and the rodent makes choices in a task based on which stimuli appear. The group published a "tutorial" paper detailing the behavior and proper training methods to get rats to perform optimally using these devices (Bussey et al., 2008). Additionally, in 2013, three separate Nature Protocols articles were published by this group, with details on how to use the touchscreens in tasks assessing executive function, learning and memory, and working memory and pattern separation in rodents (Horner et al., 2013; Mar et al., 2013; Oomen et al., 2013).

Most recently, the group developed https://touchscreencognition.org/, a site for user forums, discussion, training information, and more. The group is also actively running live training sessions for anyone interested in using touchscreens in their tasks. Their Twitter account, @TouchScreenCog, highlights recent trainings as well. Because the tests are automated and standardized, data can be compared across labs and tasks.


MouseBytes:

Additionally, MouseBytes is an open-access database where scientists can upload their data or analyze data already collected by other groups. Not only does this reduce redundancy of experiments, it also promotes transparency and reproducibility for the community. The site performs data comparison and interactive data visualization for any uploaded dataset, and it hosts guidelines and video tutorials as well.


Nature Protocols Tutorials:

Horner, A. E., Heath, C. J., Hvoslef-Eide, M., Kent, B. A., Kim, C. H., Nilsson, S. R., … & Bussey, T. J. (2013). The touchscreen operant platform for testing learning and memory in rats and mice. Nature protocols, 8(10), 1961.

Mar, A. C., Horner, A. E., Nilsson, S. R., Alsiö, J., Kent, B. A., Kim, C. H., … & Bussey, T. J. (2013). The touchscreen operant platform for assessing executive function in rats and mice. Nature protocols, 8(10), 1985.

Oomen, C. A., Hvoslef-Eide, M., Heath, C. J., Mar, A. C., Horner, A. E., Bussey, T. J., & Saksida, L. M. (2013). The touchscreen operant platform for testing working memory and pattern separation in rats and mice. Nature protocols, 8(10), 2006.

Original Touchscreen Articles:

Bussey, T. J., Muir, J. L., & Robbins, T. W. (1994). A novel automated touchscreen procedure for assessing learning in the rat using computer graphic stimuli. Neuroscience Research Communications, 15(2), 103-110.

Bussey, T. J., Padain, T. L., Skillings, E. A., Winters, B. D., Morton, A. J., & Saksida, L. M. (2008). The touchscreen cognitive testing method for rodents: how to get the best out of your rat. Learning & memory, 15(7), 516-523.

 

You can buy the touchscreens here.

 

Editor’s Note: We understand that Nature Protocols is not an open-access journal and that the touchscreens must be purchased from a commercial company and are not technically open-source. However, we appreciate the group’s ongoing effort to streamline data across labs, to put on training workshops, and to provide an open-access data repository for this type of data.

An automated behavioral box to assess forelimb function in rats

November 7, 2019

Chelsea C. Wong and colleagues at the University of California – San Francisco have developed and shared a design for an open-source behavioral chamber for the measurement of forelimb function in rats.


Forelimb function (reaching, grasping, retrieving, etc.) is a common behavioral readout for studying the neural correlates of motor learning, neural plasticity, and recovery from injury. One task commonly used to study these behaviors, the Whishaw single-pellet reach-to-grasp task, traditionally requires an experimenter to manually present each pellet and to shape the rats' behavior by placing the next pellet only once the rat has relocated to the other end of the cage, over many trials. Wong et al. developed an open-source, low-cost, automated, high-throughput version of this task. The behavioral apparatus, constructed from commercially available acrylic sheets, features a custom-built pellet dispenser, cameras and IR detectors for tracking the positions of the rat and the pellet, and an Arduino board that integrates information about the animal with dispensing of the pellet. Code for automating the task was written in MATLAB and includes a GUI for changing experiment parameters. The collected data can be analyzed with MATLAB, Excel, or most other statistical programming languages. The authors provide example data from the device to highlight its potential for combining this reaching task with chronic electrophysiological recording. The full design is available in their publication in the Journal of Neuroscience Methods.
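Because every automated trial is logged, session-level summaries take only a few lines in any scripting language. Here is a minimal Python sketch (the file and column names are assumed placeholders, not the format used by Wong et al.):

```python
import pandas as pd

# Hypothetical trial log exported from the behavioral box.
trials = pd.read_csv("reach_trials.csv")   # columns: session, trial, outcome

# Success rate per session, where outcome == "retrieved" marks a successful reach.
learning_curve = (
    trials.assign(success=trials["outcome"].eq("retrieved"))
    .groupby("session")["success"]
    .mean()
)
print(learning_curve)
```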

Check out the full publication here!


Wong, C. C., Ramanathan, D. S., Gulati, T., Won, S. J., & Ganguly, K. (2015). An automated behavioral box to assess forelimb function in rats. Journal of Neuroscience Methods, 246, 30–37. doi: 10.1016/j.jneumeth.2015.03.008

Automated Home-Cage Rodent Two-bottle Choice Test: open-source success story

October 31, 2019

Elizabeth Godynyuk and colleagues from the Creed Lab at Washington University in St. Louis recently published their design for a two-bottle choice homecage apparatus in eNeuro. It incorporates the original design (published on Hackaday.io in May 2018), modifications from Jude Frie and Jibran Khokhar (Frie & Khokhar, 2019), and additional improvements made over the course of its use. This project is a great example of collaborative open-source tool development.


Studies of liquid ingestive behavior are used in neuroscience to investigate reward-related behavior, metabolism, and circadian biology. Accurate measurement of these behaviors is needed when studying drug administration, preference between two substances, or caloric intake. To measure consumption of two liquids in mice, members of the Creed lab designed a low-cost, Arduino-based device that automatically measures consumption in a homecage two-bottle choice test. Posted to Hackaday in May 2018, the initial version of the device used photointerrupters to measure time at each sipper, 15 mL conical tubes for volumetric measurement of fluid, and a 3D-printed holder for the apparatus. Data from the photobeams are recorded to an SD card by a standard Arduino. In August 2018, the project was updated to Version 2, which is battery powered and includes a screen to display data. They made the editable TinkerCAD design available on hackaday.io.
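Summarizing the resulting log is straightforward; here is a minimal Python sketch of computing a side preference from recorded sipper events (the file and column names are assumed placeholders, not the exact fields the device writes to its SD card):

```python
import pandas as pd

events = pd.read_csv("sipper_log.csv")   # columns: timestamp_s, sipper, duration_s

# Total time spent at each sipper (left vs. right bottle).
time_at_sipper = events.groupby("sipper")["duration_s"].sum()

# Preference for the left bottle: 0.5 means no preference between bottles.
preference_left = time_at_sipper["left"] / time_at_sipper.sum()
print(time_at_sipper)
print(f"left-bottle preference: {preference_left:.2f}")
```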

In October 2018, Dr. Jibran Khokhar and colleagues at the University of Guelph posted a project log highlighting modifications that make the device larger and suitable for studying liquid intake in rats. This updated design was published in April 2019 in HardwareX. Their device adds the ability to analyze drinking microstructure by recording licking behavior and volume consumed in real time. Modifications include larger liquid reservoirs and a hydrostatic depth sensor, allowing each drinking bout to be matched to a specific change in volume.
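The depth-to-volume conversion itself is simple geometry; a minimal Python sketch (the reservoir radius and sensor readings are made-up placeholder values, not specifications from Frie & Khokhar, 2019):

```python
import math

RESERVOIR_RADIUS_CM = 1.5                 # assumed inner radius of the reservoir
area_cm2 = math.pi * RESERVOIR_RADIUS_CM ** 2

depth_before_cm = 6.42                    # depth-sensor reading at bout onset
depth_after_cm = 6.31                     # depth-sensor reading at bout offset

# 1 cm^3 of liquid is 1 mL, so bout volume follows directly from the depth change.
volume_ml = area_cm2 * (depth_before_cm - depth_after_cm)
print(f"bout volume: {volume_ml:.2f} mL")
```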

Most recently, Elizabeth Godynyuk and colleagues from the Creed lab shared their own updated version of the device in eNeuro. It remains low-cost and open-source, and results validating the device with preference testing are reported. Furthermore, the authors show that the two-bottle choice apparatus can be integrated with a fiber photometry system. In the eNeuro article, Godynyuk et al. cite Frie and Khokhar's modifications to highlight how the design can be easily adjusted to fit an investigator's needs.

These projects show how open-source designs can be modified and how different groups can collaborate to improve them. Open-source tools let research groups adapt designs to best address their research questions, rather than shaping their questions around whatever commercial tools are available.

Creed Lab Version 1: https://hackaday.io/project/158279-automated-mouse-homecage-two-bottle-choice-test

Creed Lab Version 2: https://hackaday.io/project/160388-automated-mouse-homecage-two-bottle-choice-test-v2

Frie and Khokhar 2019 (HardwareX): https://www.sciencedirect.com/science/article/pii/S2468067219300045#b0005

Godynyuk et al 2019 (eNeuro): https://www.eneuro.org/content/6/5/ENEURO.0292-19.2019.long


Frie, J. A., & Khokhar, J. Y. (2019). An open source automated two-bottle choice test apparatus for rats. HardwareX, 5, e00061. https://doi.org/10.1016/j.ohx.2019.e00061

Godynyuk, E., Bluitt, M. N., Tooley, J. R., Kravitz, A. V., & Creed, M. C. (2019). An Open-Source, Automated Home-Cage Sipper Device for Monitoring Liquid Ingestive Behavior in Rodents. eNeuro, 6(5), ENEURO.0292-19.2019. https://doi.org/10.1523/ENEURO.0292-19.2019

Ratcave

August 29, 2019

Nicholas A. Del Grosso and Anton Sirota at the Bernstein Centre for Computational Neuroscience recently published their new project, Ratcave, a Python 3D graphics library that allows researchers to create and present 3D stimuli in their experiments:


Neuroscience experiments often require software to present stimuli to a subject and record the subject's responses. Many current stimulus libraries lack the 3D graphics support needed for some psychophysics experiments, and while Python and other programming languages have 3D graphics libraries, these are hard to integrate into psychophysics toolkits without modification. To bring 3D graphics into the existing ecosystem of Python stimulus software, the authors developed Ratcave.

Ratcave is an open-source, cross-platform Python library that adds 3D stimulus support to all OpenGL-based 2D Python stimulus libraries, including VisionEgg, PsychoPy, Pyglet, and Pygame. Ratcave comes with resources including basic 3D object primitives and a range of 3D lighting effects. Ratcave's intuitive object-oriented interface lets all objects (meshes, lights, and cameras) be repositioned, rotated, and scaled, and objects can be parented to one another to specify complex relationships between them. By sending data to the graphics card as a single array using OpenGL's VAO (Vertex Array Object) functionality, drawing is made much more efficient. This approach allows over 30,000 vertices to be rendered at a performance level surpassing the needs of most behavioral research studies.
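In practice, the object-oriented pattern described above looks roughly like the sketch below, which follows the style of the ratcave documentation's introductory tutorial (a minimal sketch worth checking against the current docs; the bundled "Monkey" primitive and the rotation rate are just illustrative choices):

```python
import pyglet
import ratcave as rc

window = pyglet.window.Window()

# Load one of the bundled 3D primitives and place it in front of the camera.
reader = rc.WavefrontReader(rc.resources.obj_primitives)
monkey = reader.get_mesh("Monkey")
monkey.position.xyz = 0, 0, -2

scene = rc.Scene(meshes=[monkey])

def update(dt):
    # Objects can be repositioned/rotated between frames, e.g. spinning the stimulus.
    monkey.rotation.y += 60 * dt

pyglet.clock.schedule(update)

@window.event
def on_draw():
    with rc.default_shader:
        scene.draw()

pyglet.app.run()
```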

An advantage of Ratcave is that researchers can continue to use their preferred stimulus libraries: because Ratcave supplements existing Python stimulus libraries rather than replacing them, it is easy to add 3D stimuli to current experiments. The manuscript also reports that Ratcave has been tested and used in others' research, demonstrating reproducibility across labs and experiments.

Details on the software can be found at https://github.com/ratcave/ratcave.

Information on Ratcave can also be found at https://ratcave.readthedocs.org.