
Open Source Joystick

March 26, 2020

This week we want to talk about joy! I mean, joy-sticks. Parley Belsey, Mark Nicholas, and Eric Yttri have developed and shared an open-source joystick for studying motor behavior and decision-making in mice!


Mice are hopping and popping in research, and so researchers are getting ever more creative in their efforts to understand the finer aspects of mouse behavior. Recently, members of the Yttri lab at Carnegie Mellon used their skills to create an open-source joystick for studying motor and decision-making behaviors in mice. In their paper they describe the full behavioral setup (based on the RIVETS design from the Dudman lab), featuring a removable head-fixation point, a sipping tube, and a joystick for measuring reach trajectory, amplitude, speed, and more. Devices are controlled and data are collected via an Arduino, a solenoid circuit, a microSD card reader, and an LCD readout, and data can be analyzed in real time or saved to a CSV file for later analysis. The Arduino can be programmed to signal reward delivery when a correct response is recorded from the joystick, which streamlines outcome-based reward delivery. Belsey et al. tested their device with adult mice; the results of training are reported in the paper, along with full build instructions and ideas for how the tool might be built and used in your own lab.
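
The closed-loop logic here is simple in principle: read the joystick, check whether the movement meets the trial's criterion, and open the solenoid if it does. Below is a minimal, illustrative Python sketch of that loop; the actual device runs Arduino code, and every name in the sketch (read_joystick, open_solenoid, the 3 mm criterion) is a hypothetical placeholder rather than the authors' firmware.

```python
import time

THRESHOLD_MM = 3.0      # hypothetical reach-amplitude criterion
REWARD_OPEN_S = 0.05    # hypothetical solenoid open time (seconds)

def run_trial(read_joystick, open_solenoid, log, timeout_s=10.0):
    """One outcome-based trial: reward the mouse if a reach exceeds the criterion.

    read_joystick() -> displacement in mm   placeholder for the joystick read
    open_solenoid(duration_s)               placeholder for the reward circuit
    log(timestamp, displacement, rewarded)  placeholder for CSV / microSD logging
    """
    start = time.time()
    while time.time() - start < timeout_s:
        displacement = read_joystick()
        rewarded = displacement >= THRESHOLD_MM
        log(time.time(), displacement, rewarded)
        if rewarded:
            open_solenoid(REWARD_OPEN_S)   # correct response: deliver reward
            return True
        time.sleep(0.001)                  # poll at roughly 1 kHz
    return False                           # no criterion reach before the timeout
```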

For more, check out their publication or GitHub!


Belsey, P., Nicholas, M. A., & Yttri, E. A. (2020). Open-source joystick manipulandum for decision-making, reaching, and motor control studies in mice. eNeuro. doi: 10.1523/ENEURO.0523-19.2020

Robotic Flower System for Bee Behavior

March 19, 2020

Erno Kuusela and Juho Lämsä, from the University of Oulu in Finland, have shared their design for an open-source, computer-controlled robotic flower system for studying bumble bee behavior.


Oh… to be a bumble bee… collecting nectar from a robotic flower… of open-source design… splendid. As with behavioral studies in species more familiar to neuroscience (rodents, Drosophila, zebrafish, humans), data collection for behavioral studies in bees can be time-consuming and sensitive to human error. Thanks to the growth of the open-source movement, it's easier than ever to develop hardware and software to automate such studies, which is exactly what Kuusela and Lämsä demonstrate in their publication describing a system of robotic flowers for studying bee behavior. Their design features a control unit, based on an Arduino Mega 2560, which can collect data from and send commands to up to 32 individual robotic flowers. Each flower contains its own servo-controlled refill system: the nectar cup (in this design, a Phillips screw head that holds 1.7 µL!) is attached to the servomotor's shaft via a servo horn, and, when prompted by the program, the servo dips the cup into the flower's individual nectar reservoir. Each flower also detects feeding via an IR beam that is broken when an animal engages the feeding mechanism, and it reports these data to the control unit. A covering on the system can be marked with symbols to attract bees. The custom control software is released under an open-source license and can be used as is or modified to fit an experimenter's needs. While developed and tested with bumble bees, the system can also be adapted for a number of other species.
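
In case it helps to picture the logic, each flower's job boils down to watching its IR beam, logging a visit while the beam is broken, and then having the servo dip the nectar cup into the reservoir to refill it. A minimal illustrative Python sketch of that per-flower loop follows; the real system runs code on the Arduino control unit, and beam_broken, set_servo_angle, log, and the angles used here are hypothetical placeholders, not the authors' software.

```python
import time

REST_ANGLE = 20      # hypothetical servo angle with the cup at the feeding position
DIP_ANGLE = 120      # hypothetical servo angle that dips the cup into the reservoir

def flower_loop(beam_broken, set_servo_angle, log, poll_s=0.01):
    """Monitor one robotic flower: log visits and refill the nectar cup after each.

    beam_broken() -> bool        placeholder for the IR beam-break sensor
    set_servo_angle(degrees)     placeholder for the servo command
    log(event, timestamp)        placeholder for reporting data to the control unit
    """
    while True:
        if beam_broken():                       # an animal is on the feeding mechanism
            log("visit_start", time.time())
            while beam_broken():                # wait until the animal leaves
                time.sleep(poll_s)
            log("visit_end", time.time())
            set_servo_angle(DIP_ANGLE)          # dip the cup to refill it with nectar
            time.sleep(0.5)
            set_servo_angle(REST_ANGLE)         # return the cup to the feeding position
        time.sleep(poll_s)
```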

Read more about the specifics of this system in Kuusela & Lämsä (2016). The circuit diagrams, parts list, control software, and source code are available in the paper's supplemental information.


ToneBox: a system for automated auditory operant conditioning

March 12, 2020

Nikolas Francis and colleagues from the Kanold Laboratory at the University of Maryland, College Park, have developed and shared ToneBox, a system for high-throughput operant conditioning in a mouse homecage.


Data collection for operant conditioning studies can be time-consuming and sensitive to variability in experimental conditions. An automated approach would both reduce experimental variability and allow for high-throughput training and data collection. To address this, Francis et al. developed and shared a user-friendly automated system for training up to hundreds of mice in an auditory behavioral task. The system features a custom MATLAB GUI running on a desktop computer that communicates with a Raspberry Pi; the Pi presents auditory stimuli through a USB sound card and records licking responses from a water spout using commercially available capacitive sensors. The parts list and build instructions are readily available in the paper and the associated GitHub repository, and they are straightforward enough for new users to build and operate the system without extensive coding or design experience. The system can collect data 24/7 for weeks on end, which allows insight into behaviors not otherwise observable by experimenters who need to eat and sleep instead of watching mice at all hours.
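
The trial logic behind such a system is simple to sketch: present a tone, watch the lick sensor during a response window, and open the water valve on a hit. Here is a minimal illustrative Python version of that logic; ToneBox itself is driven by a MATLAB GUI plus a Raspberry Pi, and play_tone, lick_detected, open_water_valve, and log are hypothetical placeholders rather than the authors' code.

```python
import time

def tone_trial(play_tone, lick_detected, open_water_valve, log,
               freq_hz=8000, tone_s=0.5, response_window_s=2.0):
    """One go trial: play a tone and reward licks made within the response window.

    play_tone(freq_hz, duration_s)   placeholder for the USB sound-card output
    lick_detected() -> bool          placeholder for the capacitive spout sensor
    open_water_valve(duration_s)     placeholder for the water-delivery circuit
    log(timestamp, event)            placeholder for writing trial data to disk
    """
    log(time.time(), f"tone_on_{freq_hz}Hz")
    play_tone(freq_hz, tone_s)
    deadline = time.time() + response_window_s
    while time.time() < deadline:
        if lick_detected():                    # the mouse licked during the window
            log(time.time(), "hit")
            open_water_valve(0.05)             # brief valve opening = water reward
            return "hit"
        time.sleep(0.001)
    log(time.time(), "miss")                   # no lick before the deadline
    return "miss"
```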

The developers demonstrate their tool by training 24 C57BL/6J mice to perform auditory behavioral tasks, and they found that circadian rhythms modulated overall behavioral activity, as expected for nocturnal animals. Details about this study, about building the device, and about where to find resources to create your very own ToneBox can be gleaned from their full paper on eNeuro.

Read the paper here, check out the GitHub repository here, or keep up to date with the latest from the Kanold Lab by visiting their site!


FED Development Interview

March 5, 2020

One of the co-founders of OpenBehavior, Lex Kravitz, recently participated in an interview about the development of the Feeding Experimentation Device, lovingly known as FED! Read more to learn about how the project started, how the device has improved, and what's next for the project, including a partnership with the OpenEphys team. Perhaps this will inspire and guide you through your own project development!


1. Tell us about FED: What is it intended to do? What was its initial purpose for use in your lab?
FED is a home-cage compatible device for training mice!  FED stands for the Feeding Experimentation Device; the name is also a pun: it was invented while my lab was at the NIH, so the FED was made by the Feds 🙂
The initial purpose of FED was to measure feeding, and quantify daily calories eaten by mice living in home cages. Our lab studies feeding and obesity, and the automated methods that exist for measuring food intake (typically small weighing scales that attach to the cage) were too expensive for our lab to purchase enough of them for our studies, which often involved >50 mice at a time.  We figured other people might have the same need, so we decided to invent something to do this.  We settled on a pellet-dispensing approach because we can buy pellets in precision sizes (20 mg each), which means that instead of weighing the food we can calculate the calories from the number of pellets eaten.  From an engineering standpoint, counting pellets seemed easier than scales too.  I was very fortunate to have a postbac, Katrina Nguyen, in my lab with a background in biomedical engineering, and she spearheaded the development of our first device, which we called FED.  After she left the lab, another postbac, Mohamed Ali, worked with me to create FED2, which was a slightly refined version of FED, and FED2 eventually evolved into the FED3 model that we're working with today.

 

2. Can you briefly describe the components of the device, what it is made of, how long it takes to make one?

FED is a “smart” pellet dispenser.  At its core it's a 3D printed pellet dispenser with a microcontroller board inside, the Adafruit M0 Adalogger, which is an extremely capable microcontroller.  In addition to dispensing pellets, FED3 has two “nosepokes” for mice to interact with, 8 multicolored LEDs, an audio generator, a screen for user feedback, and a programmable output for synchronizing FED3 behavior with other techniques like optogenetics or fiber photometry. FED3 is small enough to fit inside rodent home cages and does not require a connected computer to operate.  FED3 is designed to simplify rodent training; it ships with 12 built-in programs, including fixed-ratio and progressive-ratio operant training routines, as well as optogenetic self-stimulation programs. Most importantly, FED3 is open source and can be hacked and re-programmed by users to achieve new functionality. It takes me about 1-2 hours to put one together from scratch; it would probably take a new user 3-4 hours.  The electronics parts cost about $135, and the Open Ephys Production Site is selling assembled electronics for a small markup.  With the electronics and printed parts in hand, it takes about 15 minutes to assemble.

3. Since initial development of the device, what general improvements have been made?
Since Katrina Nguyen made the first FED, we have worked a lot to improve the reliability of the device.  We envisioned this being an “always on” device that can sit in a cage and measure food intake around the clock.  It holds about 10 days' worth of pellets and the battery lasts about a week, so in theory this is possible. The major challenge, however, is that it is a mechanical pellet dispenser that is prone to issues such as jamming or dispensing errors. In practice, we clean and test the FEDs each day when we run multi-day studies.
One issue with the first FED was that we had no way to get user feedback, so if it jammed the only way we'd even know was if we opened the cage and tried to get a pellet.  For this reason we put a screen on FED2, so we could see how many pellets had been dispensed and also get an error message when it jammed.  For FED3 we added more functions, such as the two “nosepokes” for the mouse to interact with, so FED can run full home-cage operant tasks.  Finally, we added a synchronized BNC output connector, which lets us sync the nosepoke and pellet data with ephys or fiber photometry. Several groups we know are doing this to synchronize fiber photometry recordings with pellet retrieval, or to generate pulses for optogenetic self-stimulation.

4. Switching gears from the actual device to production and replication, what have you seen as necessary steps to getting other labs / researchers successfully using your device (documentation, forums, contact email, etc)?
In terms of my own work to get FED3 into other people's hands, I think the most important thing was to get other people to start using it.  So I've tried to keep our online documentation for FED3 complete and up to date.  I also give a lot of them out in exchange for feedback – basically, it's important for other people to try building and using a device to discover the flaws that you might not notice because you've worked on it too long.  I didn't wait until the devices were perfect before I started giving them away, so I could find out what worked and what didn't for new users.

5. Since updating and producing FED3, you’ve decided to connect with OpenEphys production team to sell the device on their website. Why did you decide to now sell the device and have OpenEphys distribute it? What are the advantages (to you as the developer, to OpenEphys, and to the potential users) of deciding to put the device on OpenEphys for production?
It is very challenging to distribute a hardware device as a researcher – there aren't good revenue streams for funding development of devices like this, and there aren't good logistics for distributing them through a university setting.  It also takes a lot of time to communicate with people, mail out devices, and provide support.  Really, all of this is better suited to a company, and for several reasons I don't want to start one.
Therefore, I jumped at the chance to work with the Open Ephys Production Site to take on these distribution challenges!  You can see their FED3 sales page here.  Working with them has been really fun and productive – I mailed them a FED3, and they first did a thorough evaluation of the electronics and found several things they could improve in my design to make it more robust and better for manufacture.  So we worked together to implement and test those improvements. The main advantage for me is that working with them amplifies the effort our team put into this device.  I would love to see cheap and easy operant training available in every lab.  I also support their broader mission of distributing open-source tools in neuroscience.  I'll take this moment to make a brief (no) conflict of interest statement: I'm not receiving any funding from their sales of FED3; the proceeds all go to support their efforts to distribute FED3 and other open-source hardware.

6. Describe the connection between you as a developer and OpenEphys production. Can / should others try to “pitch” their device to OpenEphys (or to other companies out there for production)?
Working with OEPS was very collaborative.  We started by evaluating whether FED3 was something they wanted to sell, and they quickly determined it was. Filipe Carvalho then went through the design files and made some recommendations for things he wanted to change to improve manufacturability.  For example, our version relied heavily on Adafruit breakout boards, whereas he engineered the relevant chips onto the FED3 printed circuit board assembly.  This makes FED3 easier to manufacture and more reliable.  We worked together on all changes, and it was a really productive collaboration.  I would definitely suggest others “pitch” their ideas to OEPS!  Even if it doesn't work out for OEPS to distribute the device, I'm sure they'll get good feedback and learn something about manufacturing and distributing open-source hardware.

7. What general advice or instructions do you have for other researchers looking to develop their devices in a similar way? Please let us know of anything you'd like to share with the open-source neuroscience world.
I would advise people to share their designs early and often.  Don't wait until the design is perfect (it never will be)!  The easier you make it for others to use your device, the more of a chance they'll see the value in it that made you create it in the first place. Try to document online as you go, e.g., by using github.com or hackaday.io to keep a record of the changes you make and the validation experiments you perform.  It can be difficult to go back and create documentation after the fact, but if you do it as you go you have a built-in record of what you did and why you made the design decisions you made. If you keep up the documentation along the way, it can also turn into something that you can publish as a methods paper when it's ready for that.  So in summary – share early and share often, and document your work!

OpenMonkeyStudio

February 27, 2020

OpenMonkeyStudio is an amazing new tool for tracking movements by and interactions among freely moving monkeys. Ben Hayden and Jan Zimmerman kindly sent along this summary of the project:

Tracking animal pose (that is, identifying the positions of an animal's major joints) is a major frontier in neuroscience. When combined with neural recordings, pose tracking allows for identifying the relationship between neural activity and movement, and decision-making inferred from movement. OpenMonkeyStudio is a system designed to track rhesus macaques moving freely in large environments.

Tracking monkeys is at least an order of magnitude more difficult than tracking mice, flies, or worms. Monkeys are, basically, large furry blobs; they don't have clear body segmentations. And their movements are much richer and more complex. For these reasons, out-of-the-box systems don't work with monkeys.

The major innovation of OpenMonkeyStudio is how it tackles the annotation problem. Deep learning systems aren't very good at generalization: they can replicate things they have seen before, or things that are kind of similar to what they have seen. So the important thing is giving them a sufficiently large training set. We ideally want to have about a million annotated images. That would cost about $10 million, and we don't have that kind of money. So we use several cool tricks, which we describe in our paper, to augment a small dataset and turn it into a large one. Doing that works very well, and results in a system that can track one or even two interacting monkeys.


Check out the preprint:

OpenMonkeyStudio: Automated Markerless Pose Estimation in Freely Moving Macaques

Praneet C. Bala, Benjamin R. Eisenreich, Seng Bum Michael Yoo, Benjamin Y. Hayden, Hyun Soo Park, Jan Zimmermann

https://www.biorxiv.org/content/10.1101/2020.01.31.928861v1

Open Source Science: Learn to Fly

February 20, 2020

Yesterday, Lex and I participated in a “hack chat” over on hackaday.io. The log of the chat is now posted on the hackaday.io site. A few topics came up that we felt deserved more attention, especially the non-research uses of open source hardware developed for neuroscience applications. Today’s post is about those topics.

For me, it has become clear that there is a major need for trainees (and many faculty) to learn the basic skill set needed to make use of the open-source tools that we feature on OpenBehavior. In my own teaching at American University, I run a course for undergraduates (and graduate students too, if they want to take it) that covers the basics of Python and Arduino programming, how to use Jupyter notebooks, how to connect Python with R and GNU Octave (rpy2 and oct2py), and how to do simple hardware projects with Arduinos. The students build a simple rig for running reaction time experiments, collect some data, analyze their own data, and then develop extension experiments to run on their own. We also cover a lot of other issues, like never using the jet colormap and why pandas is awesome. Last year, we partnered with Backyard Brains and brought their Muscle SpikerBox and Neurorobots into the course, with major help from Chris Harris (and of course Greg Gage, who has been a long-time supporter of open-source science).
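
If you haven't tried those bridge packages, here is a minimal sketch of the idea of calling R and GNU Octave from one Python script (this assumes working rpy2, oct2py, R, and Octave installations, and the reaction-time numbers are made up; it is not the course's actual code):

```python
# Summarize the same (hypothetical) reaction-time data with R and GNU Octave,
# both called from Python via rpy2 and oct2py.
from oct2py import Oct2Py
import rpy2.robjects as robjects

rts = [312, 298, 355, 402, 287]          # hypothetical reaction times in ms

# R: mean and standard deviation via base-R functions exposed through rpy2
r_vec = robjects.FloatVector(rts)
r_mean = robjects.r["mean"](r_vec)[0]
r_sd = robjects.r["sd"](r_vec)[0]
print(f"R:      mean = {r_mean:.1f} ms, sd = {r_sd:.1f} ms")

# GNU Octave: the same summary statistics via an oct2py session
oc = Oct2Py()
print(f"Octave: mean = {oc.mean(rts):.1f} ms, std = {oc.std(rts):.1f} ms")
oc.exit()
```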

Yesterday in the chat, I learned that I am not alone in developing such content. Andre Maia Chagas at the University of Sussex is working on his own set of tools for training folks to build open-source devices for neuroscience research. Another site that you might check out is Lab On The Cheap. They have posted many guides on how to make lab equipment yourself, for a lot less than any commercial vendor will be able to charge.

In reflecting on all of these activities late last night, I was reminded of this amazing video from 2015 in which 1000 musicians play Learn to Fly by Foo Fighters to ask Dave Grohl and the Foo Fighters to come and play in Cesena, Italy. To me, the awesomeness of what is currently happening in open source neuroscience is kind of like this video. We just need to work together to make stuff happen, and we can have a blast along the way.

-Mark


Check it out: https://www.youtube.com/watch?v=JozAmXo2bDE

Open-Source Neuroscience Hardware Hack Chat

February 13, 2020

This week we would like to highlight an event hosted by Hackaday.io: The Open-Source Neuroscience Hardware Hack Chat. Lex Kravitz and Mark Laubach will be available on Wednesday, February 19, 2020 at noon Pacific Time to chat with users of Hackaday.io about open-source tools for neuroscience research.

In case you don't know, Hackaday.io is a really awesome project-hosting site. Many open-source projects are hosted there that can teach you about microcontrollers, 3D printing, and other makerspace tools. It is easy to find new project ideas and helpful build instructions for your own projects on Hackaday.io.

We have previously posted about several popular projects that are hosted on Hackaday.io, such as SignalBuddy, PhotometryBox, and FED. (By the way, FED is now offered through a collaboration between the OpenBehavior and OpenEphys projects: https://open-ephys.org/fed3/fed3.) But there are a number of other interesting projects hosted on Hackaday.io that are worth a look.

For example, FORCE (Force Output of Rodent Calibrated Effort), developed by Bridget Matikainen-Ankney, can be used in studies in which the force of a rodent's response controls reward delivery.

Another interesting project is the LabRATory Telepresence Robot developed by Brett Smith. It is a robotic system that allows for motion correction in imaging studies done in behaving mice using trackball setups.

Two other cool projects on Hackaday.io provide tools for studying behavior in electric fish, an electric fish detector by Michael Haag and the electric fish piano by Davis Catolico. The electric fish piano can be used to listen to, record, and manipulate the electrical tones made by these kinds of fish.

Finally, there are a couple of projects that could be useful for research and teaching labs, including a project by Dieu My Nguyen on measuring jumping behavior in grasshoppers and a rig by Nancy Sloan for recording central pattern generators in snails.

Check out these projects and let us know what you think! And hope to chat with you next Wednesday.

Open-Source Neuroscience Hardware Hack Chat


LINK: https://hackaday.io/event/169511-open-source-neuroscience-hardware-hack-chat

Camera Control

February 6, 2020

The Adaptive Motor Control Lab at Harvard recently posted their project, Camera Control, a Python-based camera-software GUI, to GitHub.


Camera Control is an open-source software package, written by postdoctoral fellow Gary Kane, that allows video to be recorded in sync with behavior. The Python GUI and scripts allow investigators to record from multiple Imaging Source camera feeds with associated timestamps for each frame. When used in combination with a NIDAQ card, timestamps from a behavioral task can also be recorded on the falling edge of a TTL signal, allowing video analysis to be paired with physiological recordings when assessing behavioral results. The package requires Windows 10, Anaconda, and Git, and is compatible with Imaging Source USB3 cameras. The software is available for download from the lab's GitHub, where instructions for installation and video recording are provided.
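
To illustrate the core idea of saving a system timestamp for every recorded frame, here is a generic Python/OpenCV sketch. It is not the Camera Control package itself: the webcam capture and file names below are stand-ins for the Imaging Source cameras, GUI, and NIDAQ-triggered timestamps that the package actually provides.

```python
# Record a short video while logging a system timestamp for every frame
# (generic illustration only; Camera Control uses the Imaging Source camera API).
import csv
import time
import cv2

cap = cv2.VideoCapture(0)                        # first available camera
fourcc = cv2.VideoWriter_fourcc(*"mp4v")
writer = cv2.VideoWriter("session.mp4", fourcc, 30.0, (640, 480))

with open("session_timestamps.csv", "w", newline="") as f:
    log = csv.writer(f)
    log.writerow(["frame", "system_time_s"])
    frame_idx = 0
    while frame_idx < 300:                       # record roughly 10 s at 30 fps
        ok, frame = cap.read()
        if not ok:
            break
        writer.write(cv2.resize(frame, (640, 480)))
        log.writerow([frame_idx, time.time()])   # timestamp paired with this frame
        frame_idx += 1

cap.release()
writer.release()
```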

Find more on GitHub.


Kane, G. & Mathis, M. (2019). Camera Control: record video and system timestamps from Imaging Source USB3 cameras. GitHub. https://zenodo.org/badge/latestdoi/200101590

Rigbox: an open source toolbox for probing neurons and behavior

January 30, 2020

In a recent preprint, Jai Bhagat, Miles J. Wells and colleagues shared a toolbox, developed by Christopher Burgess, for streamlining behavioral neuroscience experiments.


In behavioral neuroscience, it's important to keep track of both behavioral data and neural data, and to do so in a way that makes later analysis simpler. One of the best ways to achieve this is a centralized system for running the behavioral and neural recording software while streaming all the data. To address this, Burgess and team developed Rigbox, a high-performance, open-source software toolbox that facilitates a modular approach to designing experiments. Rigbox runs in MATLAB (with some Java and C for network communication and processing-speed improvements), and its main submodule, Signals, allows intuitive programming of behavioral tasks. While it was originally developed for analyzing the behavior of mice in a steering-wheel task, the authors show its feasibility for human behavioral tasks (psychophysics and a Pong game), highlighting the broad array of ways this toolbox can be used in neuroscience.
For more, check out the full preprint!
Or jump right in on GitHub.


SimBA

January 23, 2020

Simon Nilsson from Sam Golden’s lab at the University of Washington recently shared their project SimBA (Simple Behavioral Analysis), an open source pipeline for the analysis of complex social behaviors:


“The manual scoring of rodent social behaviors is time-consuming and subjective, impractical for large datasets, and can be incredibly repetitive and boring. If you spend significant time manually annotating videos of social or solitary behaviors, SimBA is an open-source GUI that can automate the scoring for you. SimBA does not require any specialized equipment or computational expertise.

SimBA uses data from popular open-source tracking tools in combination with a small amount of behavioral annotations to create supervised machine learning classifiers that can then rapidly and accurately score behaviors across different background settings and lighting conditions. Although SimBA is developed and validated for complex social behaviors such as aggression and mating, it has the flexibility to generate classifiers in different environments and for different behavioral modalities. SimBA takes users through a step-by-step process and we provide detailed installation instructions and tutorials for different use case scenarios online. SimBA has a range of in-built tools for video pre-processing, accessing third-party tracking models, and evaluating the performance of machine learning classifiers. There are also several methods for in-depth visualizations of behavioral patterns. Because of constraints in animal tracking tools, the initial release of SimBA is limited to processing social interactions of differently coat colored animals, recorded from a top down view, and future releases will advance past these limitations. SimBA is very much in active development and a manuscript is in preparation. Meanwhile, we are very keen to hear from users about potential new features that would advance SimBA and help in making automated behavioral scoring accessible to more researchers in behavioral neuroscience.”
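
To give a sense of the general approach behind this kind of tool (and not SimBA's actual pipeline, whose feature set and models differ), a supervised behavioral classifier can be sketched in a few lines of Python: compute features from pose-tracking output, fit a classifier to a small set of human annotations, and then score every frame of new videos.

```python
# Minimal sketch of a pose-based supervised behavior classifier.
# Illustrative only; the data below are random stand-ins for pose-derived features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in for features computed from pose tracking (e.g., inter-animal distances,
# movement speeds) on 2,000 human-annotated video frames.
X = rng.normal(size=(2000, 20))
y = rng.integers(0, 2, size=2000)        # annotation: behavior absent (0) / present (1)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)                          # learn from the annotated frames
print("held-out accuracy:", clf.score(X_test, y_test))

# Score every frame of a new, unannotated video from its pose-derived features.
new_video_features = rng.normal(size=(5000, 20))
behavior_probability = clf.predict_proba(new_video_features)[:, 1]
```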


For more information on SimBA, check out the project's GitHub page here.

If you would like to contribute, or want to try out SimBA and offer feedback, you can interact with the developers on the project's Gitter page.

Plus, take a look at their recent twitter thread detailing the project.

If you would like to be added to the project’s listserv for updates, fill out this form here.