During summer 2025, the OpenBehavior team explored three advanced video analysis tools for behavioral neuroscience: SLEAP for pose estimation, DeepEthogram for supervised, frame-by-frame classification of behaviors directly from video, and A-SOID for classifying behaviors from pose estimation data. In our experience, these tools work well together for comprehensive analysis of video data, in our case from experiments with freely moving rodents performing operant tasks.
The team tested these tools using videos from the OpenBehavior Video Repository and ongoing research projects in the Laubach Lab at American University. The following instructions will help other researchers integrate these tools into their workflows.
Note that these instructions are specific to PCs running the Linux Mint operating system. We highly recommend setting up PCs with Mint or another Linux distribution, as Linux provides a stable, well-supported platform for scientific data analysis, especially for GPU computing.
These instructions were written by Jason Blackmer, an undergraduate neuroscience and data science major at American University, and Dr. Mark Laubach, a neuroscience professor at American University. Image credit: Samantha R. White
Python Environment Setup
We prefer Mamba over Conda as a package manager for Python due to its superior speed. Detailed documentation for Mamba can be found here: https://mamba.readthedocs.io/en/latest/user_guide/mamba.html
To begin, install a minimal Python environment using the Miniforge distribution, following the instructions on this page: https://github.com/conda-forge/miniforge
Once Miniforge is installed, open a terminal and create a new environment using Python 3.12 (many scientific packages are not yet compatible with Python 3.13 or 3.14):
mamba create -n my_python_env python=3.12
mamba activate my_python_env
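To confirm that the new environment is active, check that the interpreter reports the expected version (it should print Python 3.12.x):
python --version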
SLEAP Setup
The following instructions have enabled reproducible installations of SLEAP on Linux PCs with GPUs within our research group.
First, configure Mamba channels:
mamba config --add channels nvidia
mamba config --add channels conda-forge
Next, update Mamba:
mamba update -n base -c conda-forge mamba
Create and activate a new Mamba environment for SLEAP with Python 3.7.12:
mamba create -n "sleap" python=3.7.12
mamba activate sleap
Install necessary dependencies:
mamba install scipy=1.4.1
mamba install ipython
mamba install cudatoolkit=11.3.1
mamba install cudnn=8.2.1
mamba install -c nvidia cuda-nvcc=11.3
mamba install tensorflow=2.7.0
You should test GPU utilization using the example provided here: https://www.tensorflow.org/tutorials/quickstart/beginner
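As a quick check beyond that tutorial, a short snippet along these lines (a minimal sketch using standard TensorFlow calls) should list the GPU and run a small computation on it:
import tensorflow as tf
# An empty list here means TensorFlow cannot find the CUDA libraries installed above.
print(tf.config.list_physical_devices('GPU'))
# Small matrix multiply placed explicitly on the first GPU.
with tf.device('/GPU:0'):
    x = tf.random.normal((1000, 1000))
    print(tf.reduce_sum(tf.matmul(x, x)))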
You can monitor GPU usage with nvtop, as described here: https://github.com/Syllo/nvtop.
Finally, install SLEAP via pip:
pip install sleap[pypi]
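To verify the installation, the SLEAP documentation suggests printing version and system information from Python:
import sleap
sleap.versions()         # versions of SLEAP and its key dependencies
sleap.system_summary()   # GPUs visible to SLEAP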
DeepEthogram Setup
The installation of DeepEthogram presented several challenges. Below, we update the project’s installation instructions and note the modifications to the source code that were required for a successful setup.
To begin, create and activate the DeepEthogram environment:
mamba create -n deg python=3.8
mamba activate deg
Install core dependencies:
mamba install pytorch==1.11.0 torchvision==0.12.0 torchaudio==0.11.0 cudatoolkit=11.3 -c pytorch
mamba install ipython
mamba install pytorch-lightning=1.6.5
mamba install tensorflow=2.11.1
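Before installing DeepEthogram itself, it is worth confirming that this PyTorch build can reach the GPU (a minimal check using standard PyTorch calls):
import torch
print(torch.cuda.is_available())      # should print True
print(torch.cuda.get_device_name(0))  # name of the detected GPU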
Then, install DeepEthogram:
pip install deepethogram
The GUI should now launch successfully:
python -m deepethogram
A crucial step for efficient training is video conversion. MP4 files, while space-efficient, lead to extremely slow training times, so we highly recommend converting videos for DeepEthogram into directories of JPG or PNG frames.
To convert the videos, open a terminal in the folder containing them and launch an IPython session by typing ipython:
from deepethogram.projects import convert_all_videos
convert_all_videos('PATH/TO/MY/project_config.yaml', movie_format='directory', codec='.jpg')
Additionally, two changes to the DeepEthogram source code were needed for proper functionality. First, we set 'val': False on line 129 of base.py.
Second, because JPGs were used for video input, an extra argument had to be added to the VideoReader call in the __getitem__ function of deepethogram/datasets.py. The original line:
with VideoReader(self.videofile, assume_writer_style=True) as reader:
was modified to:
with VideoReader(self.videofile, assume_writer_style=True, filetype='.jpg') as reader:
With these adjustments, DeepEthogram now runs smoothly for the team.
A-SOID Setup
We followed the setup guide on the A-SOID project’s GitHub page. However, for images to display correctly during one of the steps, OpenCV had to be uninstalled from pip and reinstalled with Mamba.
Therefore, after installing and setting up the A-SOID environment, the following commands were executed:
pip uninstall opencv-python
mamba install -c conda-forge opencv
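A quick way to confirm that Python now picks up the Mamba-built OpenCV:
import cv2
print(cv2.__version__)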
Furthermore, a few changes were required so that DeepEthogram and SLEAP outputs work smoothly with A-SOID. First, a ‘time’ column was added to the DeepEthogram label data:
import numpy as np
import pandas as pd
df = pd.read_csv('PATH/TO/labels.csv', index_col=0)
df['time'] = df.index * 0.04  # 1/framerate; our labels were recorded at 25 fps
df.to_csv(path_or_buf='PATH/TO/labels.csv')
Second, a significant issue arose during A-SOID’s manual refinement phase. The adp_filt function in A-SOID/asoid/utils/preprocessing.py narrows the pose data to the selected keypoints but did not similarly subset the likelihood (llh) columns. If not all keypoints were selected, this produced mismatched DataFrame dimensions and caused the operation to fail. To fix this, the following lines were changed in the adp_filt definition:
def adp_filt(pose, idx_selected, idx_llh, llh_value):
    ### NEW LINE: likelihood columns matching the selected keypoints
    idx_llh_selected = [int(i) + 2 for i in idx_selected[0::2]]
    # Unaltered
    datax = np.array(pose.iloc[:, idx_selected[::2]])
    datay = np.array(pose.iloc[:, idx_selected[1::2]])
    ### MODIFIED THE REFERENCE HERE:
    data_lh = np.array(pose.iloc[:, idx_llh_selected])
    # Formerly: data_lh = np.array(pose.iloc[:, idx_llh])
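To see why the added index list works: pose files store each keypoint as an (x, y, likelihood) column triplet, so a keypoint’s likelihood column sits two columns after its x column. A hypothetical example with two selected keypoints illustrates the mapping:
# Hypothetical layout: keypoint 0 -> columns 0 (x), 1 (y), 2 (llh);
# keypoint 1 -> columns 3, 4, 5; keypoint 2 -> columns 6, 7, 8; and so on.
idx_selected = [0, 1, 6, 7]  # x/y columns for keypoints 0 and 2
# Every other entry of idx_selected is an x column; adding 2 yields the matching likelihood column.
idx_llh_selected = [int(i) + 2 for i in idx_selected[0::2]]
print(idx_llh_selected)  # [2, 8]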
This modification resolved the issue, allowing both DeepEthogram and SLEAP outputs to be used effectively with A-SOID.