
OpenBehavior goes international! New members of the team and new content are on the way

July 2, 2020

Exciting news in an otherwise troubled time. This month, the team behind OpenBehavior is expanding and going international. We would like to welcome Jibran Khokhar and Jude Frie to the team. They are based at the University of Guelph, and their lab has shared several highly imaginative open source projects with the community. Jibran and another recent addition to the team, Wambura Fobbs, are planning new content on educational applications of open source tools. Wambura is also leading the development of the new open source video repository. (More on that below.) Jude will be working with Samantha White and Linda Amarante on new content and will bring his background in electrical engineering to the site.

We hope that with this expanded team we will be able to create more content and also get our new video repository live within the next two weeks. Wambura is working with Josh Wilson and Mark Laubach to get the site rolling. We received very positive responses from the community and will have videos from several sites available for you soon. If anyone would like to contribute videos, please reach out to us at openbehavior@gmail.com.

We will also be able to create longer, more in-depth posts on topics that are useful for research, training, and education. We have long wanted to post some basic tutorials on the core methods needed for working with open source tools, and now we have the staff to do so.

In the education space, there are many available options, such as Backyard Brains and new platforms that use recently developed microprocessors for teaching electronics, for example littleBits. We will soon be writing about these tools, and about how open source tools are crucial for global scientific literacy, for making research affordable, and for overcoming barriers that exist due to worldwide systemic racism.

A third line of expanded work is on developing RRIDs for open source research tools. OB has a grant pending with the NSF to support this effort, but we would like to get going on it sooner rather than later. RRIDs were created to enable citations of research tools (without requiring publication of a technical report) and to track batches and variations in things like antibodies. They are maintained by the SciCrunch project at UCSD, and Dr. Anita Bandrowski is collaborating with the OB team to support the creation of RRIDs for open source devices and programs used in neuroscience research. We have already created RRIDs for some of the most popular devices that we have posted about on OB. (Check out our paper from the summer of 2019 about those devices: https://www.eneuro.org/content/6/4/ENEURO.0223-19.2019.) We will continue creating RRIDs and will reach out to the community to make sure that you know when your device has one. If you would like to discuss this effort with us, please get in touch at openbehavior@gmail.com.

In addition, we will be launching a redesign of our website this summer. It will allow for tagging devices more thoroughly and maintaining a database of available projects on the OB website. Sam White and Marty Isaacson (an undergraduate in the Laubach Lab) are working on the redesign and we will share it with you soon.

Finally, we would like to highlight a new effort by André Maia Chagas and his team called Open Neuroscience. They have created a bot for posting about open source tools. The account is on Twitter and is rocking the content. Check it out: https://twitter.com/openneurosci

Video Repository Initiative on OpenBehavior

June 11, 2020

Last fall, when teaching an undergraduate course on computational methods in neuroscience at American University, we wanted to bring in some of the tools for video analysis that have been promoted on OpenBehavior. The idea was to introduce these tools at the end of the course, after the students had learned a bit about Python, Anaconda, Jupyter, Arduinos, etc. We decided on ezTrack from the Cai Lab, as it is written in Python and uses Jupyter notebooks. It was easy to prepare for this topic until we realized that we needed simple videos for tracking. Those from our lab come from operant chambers illuminated with infrared LEDs and require a good bit of preprocessing to be suited for analysis with simple tracking algorithms. In addition, we use Long-Evans rats in our studies, and they are a challenge to track given their coloring. So we looked around the web for example videos and were surprised at how rarely labs that have developed, published with, and promoted tools for video analysis share example videos. Most videos that we found showed the results of tracking and did not provide raw video data. We did find a nice example of open-field behavior by mice (Samson et al., 2015), and used the supplemental videos from this now five-year-old paper for the course.
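To make the tracking step concrete, here is a minimal sketch, in Python with OpenCV rather than ezTrack itself, of the kind of simple threshold-and-centroid tracking that such example videos feed into. The file name and threshold value are placeholders, not values from any particular dataset.

    import cv2

    cap = cv2.VideoCapture("open_field_example.mp4")  # hypothetical file name
    centroids = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Assume a dark animal on a light floor; invert the threshold otherwise.
        _, mask = cv2.threshold(gray, 60, 255, cv2.THRESH_BINARY_INV)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        if contours:
            largest = max(contours, key=cv2.contourArea)
            m = cv2.moments(largest)
            if m["m00"] > 0:
                centroids.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    cap.release()

Even this toy pipeline breaks down on infrared operant-chamber footage or dark-hooded rats without preprocessing, which is exactly why simple, well-lit example videos are so useful for teaching.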

These experiences made us wonder if having a collection of videos for teaching and training would be useful to the community. A collection of video recordings of animals engaged in standard neuroscience behavioral tasks (e.g., feeding, foraging, fear conditioning, operant learning) would be useful for educational purposes: students could read published papers to understand the experimental design and then analyze data from the published studies using modifications of available tutorial code for packages such as ezTrack. For researchers, these same videos would be useful for reproducing analyses from published studies and for quickly learning how to use published code to analyze their own data. Furthermore, with the development of tools that use advanced statistical methods for video analysis (e.g., DeepLabCut, B-SOiD), it seems warranted to have a repository that could be used to benchmark algorithms and explore their parameter space. One could even envision an analysis competition using standard benchmark videos, similar to the benchmark datasets in machine learning that have driven the development of powerful algorithms going well beyond the performance of those available only a decade ago (e.g., xgboost).

So we are posting today to ask for community participation in the creation of a video repository. The plan is to post license-free videos to the OpenBehavior Google Drive account. Our OpenBehavior team will convert the files to a standard (mp4) format and post links to the videos on the OpenBehavior website, so they will be accessible to the community. The website will list the creator of the video file, the camera and software used for the recording, the resolution, frame rate and duration of recording, the species and information on the behavioral experiment (and a link to the publication or preprint if the work is from a manuscript).
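For contributors curious about what the conversion step involves, here is a minimal sketch, under the assumption that we use ffmpeg called from Python; the file names and codec settings are illustrative rather than a fixed pipeline.

    import subprocess

    def convert_to_mp4(src, dst):
        # Re-encode an arbitrary input video to H.264 mp4 with ffmpeg.
        # CRF 18 keeps quality high; raise it to shrink file sizes.
        subprocess.run(
            ["ffmpeg", "-i", src, "-c:v", "libx264", "-crf", "18",
             "-pix_fmt", "yuv420p", dst],
            check=True,
        )

    convert_to_mp4("raw_session.avi", "raw_session.mp4")  # hypothetical file names

Contributors do not need to do any of this themselves; uploading the original files is enough, and our team will handle the conversion.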

For studies in rodents, we are especially interested in videos showing overhead views from open-field and operant arena experiments and close-up videos of facial reactions, eyeblinks, oral movements and limb reaching. We are happy to curate videos from other species (fish, birds, monkeys, people) as well.

If you are interested in participating, please complete the form on this page or reach out to us via email at openbehavior@gmail.com or Twitter at @OpenBehavior.


Visual stimulator with customizable light spectra

May 7, 2020

Katrin Franke, Andre Maia Chagas and colleagues have developed and shared a spatial visual stimulator with an arbitrary spectrum of light for visual neuroscientists.


Vision research, quite obviously, relies on precise control of visual stimuli. A great number of commercially available devices can present visual stimuli to humans and other species; however, these devices are predominantly designed for the human visual spectrum. Other species, such as Drosophila, zebrafish, and rodents, have visual spectra that extend into the UV, and devices that cannot present this range of stimuli often limit our understanding of their visual systems. To address this, Franke, Chagas and colleagues developed an open source, relatively low-cost visual stimulator that can be customized with up to six chromatic channels. Given the components used to build the device, the light spectrum is arbitrary and can be adapted to different animal models based on their visual spectra. The details of this device, including the parts list and information on a custom Python library for generating visual stimuli (QDSpy), can be found in the eLife publication. The device has been tested with stimulation of the mouse retina and with in vivo zebrafish studies; details on these experiments can also be found in the publication.

Check out the eLife article here!


Franke, K., Chagas, A. M., Zhao, Z., Zimmermann, M. J., Bartel, P., Qiu, Y., . . . Euler, T. (2019). An arbitrary-spectrum spatial visual stimulator for vision research. eLife, 8. doi:10.7554/eLife.48779

Open Source Whisking Video Database

April 2, 2020

Alireza Azarfar and colleagues at the Donders Institute for Brain, Cognition and Behaviour at Radboud University have published an open source database of high-speed videos of whisking behavior in freely moving rodents.


As responsible citizens, you are probably missing the lab and working from home. Maybe you have plenty to do, or maybe you're looking for some new data to analyze to increase your knowledge of active sensing in rodents! Well, in case you don't have that data at hand and can't collect it yourself for a few more weeks, we have a treat for you! Azarfar et al. have shared a database of 6,642 high-quality videos featuring juvenile and adult male rats and adult male mice exploring stationary objects. The dataset includes a wide variety of experimental conditions, including genetic, pharmacological, and sensory deprivation interventions, to explore how active sensing behavior is modulated by different factors. Information about the interventions and experimental conditions is available as a supplementary Excel file.

The videos are available as mp4 files as well as MATLAB matrices that can be converted into a variety of data formats. All videos underwent quality control and feature different amounts of noise and background, which makes the collection a great tool for mastering video analysis. A toolbox for a variety of whisker analysis methods, including nose and whisker tracking, is available from this group on GitHub. This database is a great resource for studying sensorimotor computation, top-down mechanisms of sensory navigation, cross-species comparisons of active sensing, and the effects of pharmacological interventions on whisking behavior.
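As a starting point, here is a minimal sketch of how one might pull a session into Python: read frames from one of the mp4 files with OpenCV and load an accompanying MATLAB matrix with SciPy. The file names and variable inspection are placeholders; check the database documentation for the actual layout.

    import cv2
    from scipy.io import loadmat

    # Read every frame of one video into memory (fine for short clips).
    frames = []
    cap = cv2.VideoCapture("whisking_session_0001.mp4")  # hypothetical file name
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frames.append(frame)
    cap.release()

    # Load the matching MATLAB matrix and list the variables it contains.
    mat = loadmat("whisking_session_0001.mat")  # hypothetical file name
    print(len(frames), sorted(k for k in mat if not k.startswith("__")))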

Read more about active sensing behavior, data collection methods, and rationale here.

The database is available through GigaScience DB. 

You can also try out their Matlab Whisker Tracker from Github!


OpenMonkeyStudio

February 27, 2020

OpenMonkeyStudio is an amazing new tool for tracking movements by and interactions among freely moving monkeys. Ben Hayden and Jan Zimmermann kindly sent along this summary of the project:

Tracking animal pose (that is, identifying the positions of their major joints) is a major frontier in neuroscience. When combined with neural recordings, pose tracking allows for identifying the relationship between neural activity and movement, and for inferring decision-making from movement. OpenMonkeyStudio is a system designed to allow tracking of rhesus macaques in large freely moving environments.

Tracking monkeys is at least an order of magnitude more difficult than tracking mice, flies, and worms. Monkeys are, basically, large furry blobs; they don't have clear body segmentations. And their movements are much richer and more complex. For these reasons, out-of-the-box systems don't work with monkeys.

The major innovation of our OpenMonkeyStudio is how it tackles the annotation problem. Deep learning systems aren't very good at generalization. They can replicate things they have seen before or things that are kind of similar to what they have seen. So the important thing is giving them a sufficiently large training set. We ideally want to have about a million annotated images. That would cost about $10 million, and we don't have that kind of money. So we use several cool tricks, which we describe in our paper, to augment a small dataset and turn it into a large one. Doing that works very well and results in a system that can track one or even two interacting monkeys.
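As a generic illustration of the augmentation idea, and not the specific tricks described in the OpenMonkeyStudio paper, here is a sketch of how a single annotated frame plus its keypoints can be multiplied into several training examples by mirroring and brightness jitter.

    import numpy as np

    def augment(image, keypoints):
        # image: HxWx3 uint8 array; keypoints: Nx2 float array of (x, y).
        # Returns several (image, keypoints) training examples from one annotation.
        samples = [(image, keypoints)]
        # Horizontal mirror: flip the pixels and the x-coordinates.
        flipped = image[:, ::-1].copy()
        kp_flip = keypoints.copy()
        kp_flip[:, 0] = image.shape[1] - 1 - kp_flip[:, 0]
        samples.append((flipped, kp_flip))
        # Brightness jitter: pixel values shift, keypoint locations do not.
        for img, kp in list(samples):
            brighter = np.clip(img.astype(np.int16) + 30, 0, 255).astype(np.uint8)
            samples.append((brighter, kp))
        return samples

Real augmentation pipelines go much further than this, but the principle is the same: each manual annotation gets reused many times, which is what makes a very large training set affordable.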


Check out the preprint:

OpenMonkeyStudio: Automated Markerless Pose Estimation in Freely Moving Macaques

Praneet C. Bala, Benjamin R. Eisenreich, Seng Bum Michael Yoo, Benjamin Y. Hayden, Hyun Soo Park, Jan Zimmermann

https://www.biorxiv.org/content/10.1101/2020.01.31.928861v1

Open Source Science: Learn to Fly

February 20, 2020

Yesterday, Lex and I participated in a “hack chat” over on hackaday.io. The log of the chat is now posted on the hackaday.io site. A few topics came up that we felt deserved more attention, especially the non-research uses of open source hardware developed for neuroscience applications. Today’s post is about those topics.

For me, it has become clear that there is a major need for trainees (and many faculty) to learn the basic skill set needed to make use of the open source tools that we feature on OpenBehavior. In my own teaching at American University, I run a course for undergraduates (and graduate students too, if they want to take it) that covers the basics of Python and Arduino programming, how to use Jupyter notebooks, how to connect Python with R and GNU Octave (rpy2 and oct2py), and how to do simple hardware projects with Arduinos. The students build a simple rig for running reaction time experiments, collect some data, analyze their own data, and then develop extension experiments to run on their own. We also cover a lot of other issues, like never using the jet colormap and why pandas is awesome. Last year, we partnered with Backyard Brains and brought their Muscle SpikerBox and Neurorobots into the course, with major help from Chris Harris (and of course Greg Gage, who has been a long-time supporter of open source science).
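To give a flavor of the rig, here is a minimal sketch of the logging side of a reaction time experiment: an Arduino prints one reaction time (in milliseconds) per trial over USB serial, and Python records them with pyserial. This is an illustration of the approach rather than the exact course code, and the port name, baud rate, and message format are assumptions.

    import csv
    import serial  # pip install pyserial

    N_TRIALS = 20

    with serial.Serial("/dev/ttyACM0", 115200, timeout=30) as port, \
         open("reaction_times.csv", "w", newline="") as out:
        writer = csv.writer(out)
        writer.writerow(["trial", "rt_ms"])
        for trial in range(1, N_TRIALS + 1):
            line = port.readline().decode().strip()  # Arduino sends e.g. "342"
            if line:
                writer.writerow([trial, int(line)])

From there, the students load the CSV with pandas and compare conditions in a Jupyter notebook.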

Yesterday in the chat, I learned that I have not been alone in developing such content. Andre Maia Chagas at the University of Sussex is working on his own set of tools for training folks to build and use open source devices for neuroscience research. Another site that you might check out is Lab On The Cheap. They have done a lot of posts on how to make lab equipment yourself, for a lot less than any commercial vendor can charge.

In reflecting on all of these activities late last night, I was reminded of this amazing video from 2015 in which 1000 musicians play Learn to Fly by Foo Fighters to ask Dave Grohl and the Foo Fighters to come and play in Cesena, Italy. To me, the awesomeness of what is currently happening in open source neuroscience is kind of like this video. We just need to work together to make stuff happen, and we can have a blast along the way.

-Mark


Check it out: https://www.youtube.com/watch?v=JozAmXo2bDE

Open-Source Neuroscience Hardware Hack Chat

February 13, 2020

This week we would like to highlight an event hosted by Hackaday.io: The Open-Source Neuroscience Hardware Hack Chat. Lex Kravitz and Mark Laubach will be available on Wednesday, February 19, 2020 at noon Pacific Time to chat with users of Hackaday.io about open-source tools for neuroscience research.

In case you don't know, Hackaday.io is a really awesome project-hosting site. Many open-source projects are hosted there that can teach you about microcontrollers, 3D printing, and other makerspace tools. It is easy to find new ideas and helpful build instructions for your own projects on Hackaday.io.

We have previously posted about several popular projects that are hosted on Hackaday.io, such as SignalBuddy, PhotometryBox, and FED. (By the way, FED is now offered through a collaboration between the OpenBehavior and OpenEphys projects: https://open-ephys.org/fed3/fed3.) But there are a number of other interesting projects hosted on Hackaday.io that are worth a look.

For example, FORCE (Force Output of Rodent Calibrated Effort), developed by Bridget Matikainen-Ankney, can be used in studies in which the force of a rodent's response controls reward delivery.

Another interesting project is the LabRATory Telepresence Robot developed by Brett Smith. It is a robotic system that allows for motion correction in imaging studies done in behaving mice using trackball setups.

Two other cool projects on Hackaday.io provide tools for studying behavior in electric fish, an electric fish detector by Michael Haag and the electric fish piano by Davis Catolico. The electric fish piano can be used to listen to, record, and manipulate the electrical tones made by these kinds of fish.

Finally, there are a couple of projects that could be useful for research and teaching labs, including a project on measuring jumping behavior by grasshoppers by Dieu My Nguyen and a rig for recording central pattern generators in snails by Nancy Sloan.

Check out these projects and let us know what you think! We hope to chat with you next Wednesday.

Open-Source Neuroscience Hardware Hack Chat


LINK: https://hackaday.io/event/169511-open-source-neuroscience-hardware-hack-chat

Camera Control

February 6, 2020

The Adaptive Motor Control Lab at Harvard recently posted their project Camera Control, a Python-based camera software GUI, to GitHub.


Camera Control is an open-source software package, written by postdoctoral fellow Gary Kane, that allows video to be recorded in sync with behavior. The Python GUI and scripts allow investigators to record from multiple Imaging Source camera feeds with associated timestamps for each frame. When used in combination with a NIDAQ card, timestamps from a behavioral task can also be recorded on the falling edge of a TTL signal. This allows video analysis to be paired with physiological recordings, which can be beneficial in assessing behavioral results. The package requires Windows 10, Anaconda, and Git, and is compatible with Imaging Source USB3 cameras. The software is available for download from the lab's GitHub, where instructions for installation and video recording are provided.
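For readers who just want the general idea, here is a minimal sketch of recording frames alongside per-frame system timestamps using OpenCV and a generic webcam. This is not the Camera Control package itself, which works with the Imaging Source camera SDK and can take hardware timestamps from a NIDAQ card.

    import time
    import cv2

    cap = cv2.VideoCapture(0)  # first available camera
    fourcc = cv2.VideoWriter_fourcc(*"mp4v")
    writer = cv2.VideoWriter("session.mp4", fourcc, 30.0, (640, 480))

    timestamps = []
    for _ in range(300):  # roughly 10 seconds at 30 fps
        ok, frame = cap.read()
        if not ok:
            break
        timestamps.append(time.perf_counter())  # software timestamp for this frame
        writer.write(cv2.resize(frame, (640, 480)))

    cap.release()
    writer.release()

Software timestamps like these can drift relative to other equipment, which is why hardware TTL-based timestamps, as in Camera Control, are valuable for aligning video with physiology.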

Find more on Github.


Kane, G. & Mathis, M. (2019). Camera Control: record video and system timestamps from Imaging Source USB3 cameras. GitHub. https://zenodo.org/badge/latestdoi/200101590

RatHat: A self-targeting printable brain implant system

January 9, 2020

Leila Allen and colleagues in Tim Allen’s lab at Florida International University recently developed RatHat, a self-targeting printable brain implant system. Below they describe their project:


“There has not been a major change in how neuroscientists approach stereotaxic methods in decades. Here we present a new stereotaxic method that improves on traditional approaches by reducing costs, training, surgical time, and aiding repeatability. The RatHat brain implantation system is a 3D printable stereotaxic device for rats that is fabricated prior to surgery and fits to the shape of the skull. RatHat builds are directly implanted into the brain without the need for head-leveling or coordinate-mapping during surgery. The RatHat system can be used in conjunction with the traditional u-frame stereotaxic device, but does not require the use of a micromanipulator for successful implantations. Each RatHat system contains several primary components including the implant for mounting intracranial components, the surgical stencil for targeting drill sites, and the protective cap for impacts and debris. Each component serves a unique function and can be used together or separately. We demonstrate the feasibility of the RatHat system in four different proof-of-principle experiments: 1) a 3-pole cannula apparatus, 2) an optrode-electrode assembly, 3) a fixed-electrode array, and 4) a tetrode hyperdrive. Implants were successful, durable, and long-lasting (up to 9 months). RatHat print files are easily created, can be modified in CAD software for a variety of applications, and are easily shared, contributing to open science goals and replications. The RatHat system has been adapted to multiple experimental paradigms in our lab and should be a useful new way to conduct stereotaxic implant surgeries in rodents.

RatHat is freely available to academic researchers, achieving open science goals. Academic and non-profit researchers interested in receiving the 3D files can contact Dr. Timothy Allen (tallen@fiu.edu). We will first provide you a simple noncommercial license to be executed by your institution, and upon completion, printable and editable 3D files of the implant system. Our responses are fast, and all files are provided immediately after receiving the aforementioned document. Our goals are noncommercial, and our only interests are to share RatHat as widely as possible in support of our open science goals and to improve the pace of discovery using chronic brain implant systems for behavioral studies.”



The Allen lab has provided a video tutorial on how to implant RatHat, which you can view here on YouTube.


For more details, you can check out the preprint here.


Autopilot

December 12, 2019

Jonny Saunders from Michael Wehr's lab at the University of Oregon recently posted a preprint documenting their project Autopilot, a Python framework for running behavioral experiments:


Autopilot is a Python framework for running behavioral experiments on Raspberry Pi single-board computers. It incorporates all aspects of an experiment, including the hardware, stimuli, behavioral task paradigm, data management, data visualization, and a user interface. The authors propose that Autopilot is the fastest, least expensive, and most flexible behavioral system currently available.

The benefit of using Autopilot is that it allows more experimental flexibility, letting researchers optimize it for their specific experimental needs. Additionally, this project exemplifies how useful a Raspberry Pi can be for performing experiments and recording data. The preprint discusses many benefits of Raspberry Pis, including their speed, precision, and proper data logging, and they only cost $35 (!!). Ultimately, the authors developed Autopilot in an effort to encourage users to write reusable, portable experiments that can be contributed to a public central library to promote replication and reproducibility.
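To give a sense of why a Raspberry Pi is attractive for this kind of work, here is a generic sketch of a trial loop wired through the Pi's GPIO pins, using the standard gpiozero library rather than Autopilot's own API; the pin numbers are assumptions about the wiring.

    import time
    from gpiozero import LED, Button  # standard Raspberry Pi GPIO library

    cue = LED(17)      # hypothetical GPIO pin driving a cue light
    poke = Button(27)  # hypothetical GPIO pin reading a nose-poke switch

    latencies = []
    for trial in range(10):
        time.sleep(2.0)            # inter-trial interval
        cue.on()
        start = time.monotonic()
        poke.wait_for_press()      # block until the switch closes
        latencies.append(time.monotonic() - start)
        cue.off()

    print(latencies)

Autopilot wraps this kind of low-level control in a full framework, handling stimuli, task logic, data management, visualization, and a user interface, which is what it adds over a one-off script like this.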


For more information, check out their presentation or the Autopilot website here.

Additionally, documentation is here, along with a GitHub repo, and a link to their preprint is here.