
Starting with Classifiers #403

Open
bancroftg opened this issue Nov 26, 2024 · 4 comments

Comments

@bancroftg

Hello! I have been successfully using SimBA for basic ROI analysis and it has been working great. I now want to start classifying some basic mouse behaviors: rearing, digging, climbing, grooming, and sniffing. The 10-minute videos are a top-down view of a single mouse, tracked at four body parts (nose, both ears, and center) using SLEAP.

So here are some of my questions:

  1. Are there existing classifiers for these behaviors that could work for our videos? Or is it more advisable to start from scratch?
  2. How much labeling/training do you recommend for these behaviors? They tend to occur rarely or not at all in the course of a whole video.
  3. Would you recommend using the SimBA behavior annotation tools or third-party software such as Ethovision for these behaviors?

Please let me know!

@goodwinnastacia
Collaborator

Hello! Glad to hear that the ROI feature is working well for you. For your classifiers, I would recommend starting from scratch. You'll want to make short video clips containing the behaviors using the video trimmer tool and annotate those, either in the SimBA annotation tool or in something like BORIS if you prefer it. I would label 1-2 whole videos in addition to the short clips, so the classifier learns what both positive and negative frames for each behavior look like. The number of frames you need to label depends on the complexity of the behaviors you're measuring. Here's a brief guide to the number of frames we labeled per classifier and the resulting performance:
[Image: table of frames labeled per classifier and the resulting classifier performance]

@DanaeNikol

DanaeNikol commented Dec 2, 2024

Hi there! I am also planning on labelling some aggression-related behaviors (attack, sniff, approach, etc.) in fighting scenes of a social defeat-like paradigm. However, in my case directionality is pretty important. I first thought of encoding the directionality of each behavior in its name, but that way I end up with a very long list to annotate (I plan on using advanced labelling). I then thought of combining SimBA's ability to extract directionality measures, distances, etc. with the annotated behaviors by writing some customized scripts. Would you have any suggestions along these lines that I could start with? Or do you think this approach will end up being more complicated?

@sronilsson
Collaborator

sronilsson commented Dec 3, 2024

Hi @DanaeNikol! There are a couple of ways I have approached this before. The trouble is that neither is well supported within the SimBA graphical interface, but I'm happy to help get it running.

i) One approach is to annotate the behaviors in BORIS. When annotating in BORIS, you can specify a subject for each behavior state: e.g., you annotate following behavior and mark that animal_1 is the one doing the following. Once done, I have some code that concatenates the BORIS subject and behavior fields to create an animal_1_following annotation that SimBA will accept, as in the sketch below. Happy to share the SimBA code and instructions with you. I don't know if it's viable though; it depends on how many behaviors and directions you have. Regardless, it becomes much easier than annotating in SimBA. I was talking to some people on Gitter last year who use this approach; I'm not sure what came of it, but I could send them a message and put you in touch if needed.
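As a minimal sketch (not the actual SimBA helper code), assuming a standard BORIS tabular export with "Subject" and "Behavior" columns, the concatenation could look something like this:

```python
import pandas as pd

# Hypothetical file names; point these at your own BORIS export.
boris_df = pd.read_csv("boris_export.csv")

# e.g., Subject "animal_1" + Behavior "following" -> "animal_1_following"
boris_df["Behavior"] = (
    boris_df["Subject"].str.strip() + "_" + boris_df["Behavior"].str.strip()
)

# Save the direction-specific annotations for import into SimBA.
boris_df.to_csv("boris_export_directional.csv", index=False)
```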

ii) If there are only 2 individuals, and the two individuals are "similar" (e.g., they can be expected to have similar distributions of movement patterns such as velocities, and are of similar size), then it can be enough to "reverse" or duplicate a classifier that was built to recognize the behavior in one direction. For example, I have built a classifier by annotating only when animal 1 approaches animal 2, and then "reversed" or duplicated the classifier to also recognize when animal 2 approaches animal 1. I wrote some functions to do this, documented HERE, and used it for example on slide 5 HERE. The trouble is that the code was written 5 years ago and will most likely just crash, lol; I haven't had much incentive to dig into it. But the general idea is that you can build a model for the animal_2 -> animal_1 direction by treating animal 1 as a surrogate for animal 2 and vice versa (see the sketch below). Again, I don't know if it's viable; it depends on your setup.
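For illustration only, here is a minimal sketch of the role-swapping idea, assuming feature columns prefixed "Animal_1_" and "Animal_2_" (your SimBA project files may use different naming):

```python
import pandas as pd

def swap_animals(df: pd.DataFrame) -> pd.DataFrame:
    """Return a copy of df with Animal_1_* and Animal_2_* columns swapped."""
    def swap(col: str) -> str:
        if col.startswith("Animal_1_"):
            return col.replace("Animal_1_", "Animal_2_", 1)
        if col.startswith("Animal_2_"):
            return col.replace("Animal_2_", "Animal_1_", 1)
        return col
    return df.rename(columns=swap)

# Hypothetical file name; use your project's extracted-features CSV.
features = pd.read_csv("features_extracted.csv")
reversed_features = swap_animals(features)
# Scoring the animal_1 -> animal_2 classifier on reversed_features yields
# predictions for the animal_2 -> animal_1 direction.
```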

@DanaeNikol

Hi @sronilsson! Thank you for the alternatives. In the paradigm I use, I have a CD1 and a BL6 mouse, so the BORIS approach sounds very close to how I would imagine things, especially if the annotations can later be imported into SimBA for training a network. I would definitely appreciate you sharing the SimBA code and instructions, and also putting me in touch with the people who used this approach so they can share their experience. Thanks a lot!
