Replies: 2 comments 3 replies
Hey, welcome to Bonsai :) Just a preface: next time, try to divide your post into multiple smaller ones. It will likely increase the chances of individual questions being answered much more quickly, and it will also make the content easier for the community to find and reuse.

> The KeyDown file tells us at what time (in seconds) we pressed the respective key. Because our program counts time continuously across runs (if one experiment ends at 15.5 s, the next continues counting from that time), it is hard to match this time up with the video footage. Any recommendations on how to sync our fiber photometry data and the video footage?

Can you include this workflow? How are you recording the photometry signal? Does that piece of hardware have access to an external digital input? Would a jitter of a couple of dozen milliseconds be acceptable in your alignment? What camera are you using?

> [...] the Bonsai program generated a CSV file with two columns of numbers and columns of "true/false"; however, there were no column titles (attached). Does anyone know how to interpret this with respect to my Bonsai workflow?

If you want to add a header to your csv, make sure to check the
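In the meantime, a headerless Bonsai CSV can be given names on the analysis side. A minimal sketch in Python, assuming (this is an assumption, not something the file records) that the two numeric columns are the tracked centroid X/Y and each boolean column is one region of interest, in the order the sinks appear in the workflow:

```python
import csv
import io

# Fabricated sample mimicking the headerless output: two numeric
# columns (assumed centroid X, Y) followed by one True/False column
# per region of interest. Replace with open("your_file.csv").
raw = """312.5,140.2,False,True,False
310.1,142.8,False,True,False
"""

# Hypothetical column names -- they must match the order of the
# selectors in your own workflow, which only you can confirm.
names = ["x", "y", "in_corner1", "in_corner2", "in_interaction"]

rows = []
for rec in csv.reader(io.StringIO(raw)):
    row = dict(zip(names, rec))
    row["x"], row["y"] = float(row["x"]), float(row["y"])
    for k in names[2:]:
        row[k] = row[k] == "True"   # parse the true/false flags
    rows.append(row)

print(rows[0]["in_corner2"])  # True
```

Counting zone entries then reduces to counting False-to-True transitions in each boolean column.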
Hello all,
In my lab, we are running a social interaction/preference/avoidance test with our mice. Essentially, we have a "bully mouse" restricted to a mesh cage against one wall of an arena, while our test mice run freely in the arena. There is an interaction zone surrounding the mesh cage and two "corner" zones in the corners opposite the mesh cage (see the image below for a diagram of the experiment).
We recorded a webcam video of the experiments and recorded neuronal activity using fiber photometry, all through a single Bonsai workflow. We used the KeyDown feature to note when the mice entered the arena and when the experiment ended. I have the following questions:
The KeyDown file tells us at what time (in seconds) we pressed the respective key. Because our program counts time continuously across runs (if one experiment ends at 15.5 s, the next experiment continues counting from that time), it is hard to match these times up with the video footage. Any recommendations on how to sync our fiber photometry data and the video footage?
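A minimal sketch of one alignment approach, assuming the video recording starts at the moment the "trial start" key was pressed and the camera runs at a fixed frame rate (both `FPS` and the timestamps below are hypothetical placeholders):

```python
# Map continuous KeyDown timestamps onto frame indices of one video.
FPS = 30.0                        # assumed camera frame rate
trial_start_s = 15.5              # KeyDown time when this video began
key_times_s = [15.5, 20.0, 75.5]  # events on the continuous KeyDown clock

def to_frame(t_s, start_s=trial_start_s, fps=FPS):
    """Convert a continuous KeyDown timestamp to a frame index in this video."""
    return round((t_s - start_s) * fps)

frames = [to_frame(t) for t in key_times_s]
print(frames)  # [0, 135, 1800]
```

Note this assumes negligible drift between the camera clock and the KeyDown clock; a hardware sync signal (as the reply above asks about) is the robust alternative.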
With the KeyDown feature, when inputting my fiber photometry data into the Python program I am using to analyze it, should I truncate the data according to the times listed in the KeyDown file?
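If truncating by KeyDown times is the right call for your pipeline (only you can confirm that), the operation itself is a simple slice. A sketch with fabricated timestamps and an empty signal standing in for the real trace:

```python
# Keep only photometry samples between the KeyDown "trial start" and
# "trial end" times, then re-zero the clock so the trial starts at 0 s.
start_s, end_s = 15.5, 75.5                # hypothetical KeyDown times
ts = [i / 10 for i in range(150, 850)]     # fake 10 Hz timestamps, 15.0-84.9 s
sig = [0.0] * len(ts)                      # fake photometry trace

trial = [(t - start_s, v) for t, v in zip(ts, sig) if start_s <= t <= end_s]
print(len(trial), trial[0][0])  # 601 0.0
```

The same slice-and-re-zero step repeated per KeyDown pair would separate consecutive experiments recorded on the continuous clock.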
When trying to analyze the video data, we need to mark the different types of zones so the program can tell us how many times the mouse goes into each zone (two corners and interaction zone). How can I figure out the pixel coordinates to input into my Bonsai program? (see the Bonsai workflow: https://drive.google.com/file/d/1-b0rlSdyaaiVTPYLZ1qOXY9QVbsq5XDe/view?usp=sharing)
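One practical way to get the pixel coordinates is to open a still frame from the video in any image viewer that shows the cursor position and read the corners off directly. Downstream, the zone test reduces to point-in-rectangle checks; a sketch with made-up coordinates for a 640x480 frame:

```python
# Hypothetical ROI rectangles in pixel coordinates: (x_min, y_min, x_max, y_max).
# Read your real values off a still frame of your own video.
rois = {
    "corner_left":  (0, 300, 100, 400),
    "corner_right": (540, 300, 640, 400),
    "interaction":  (220, 0, 420, 120),
}

def zone_of(x, y, rois=rois):
    """Return the name of the ROI containing pixel (x, y), or None."""
    for name, (x0, y0, x1, y1) in rois.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name
    return None

print(zone_of(300, 60))   # interaction
print(zone_of(320, 200))  # None
```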
In each video, I was recording two arenas with the same webcam simultaneously. Thus, each video contains two separate experimental mice that I want to analyze individually. Is it possible to analyze each experiment/arena all at once? To do this, would I need to add three more "Regions of Interest"?
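If both arenas sit side by side in the same frame, one cheap way to keep the two mice separate in analysis is a vertical split at an assumed pixel boundary (here x = 320 for a 640-px-wide frame; a hypothetical value, not anything Bonsai requires):

```python
SPLIT_X = 320  # assumed pixel column dividing the two arenas

def arena_of(x):
    """Assign a tracked position to an arena by its x coordinate."""
    return "arena_1" if x < SPLIT_X else "arena_2"

positions = [(100, 50), (500, 80), (310, 200)]
print([arena_of(x) for x, _ in positions])  # ['arena_1', 'arena_2', 'arena_1']
```

Within the workflow itself, duplicating the tracking branch with per-arena crop regions (which would indeed mean a second set of ROIs) is the more usual route.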
How do I program the Bonsai workflow to recognize the mouse in the arenas?
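Video tracking of this kind usually comes down to thresholding the animal against the background and taking the centroid of the resulting blob (in Bonsai, typically a chain of vision operators does this). The same idea in plain Python on a toy grayscale frame, purely to illustrate the principle:

```python
# Toy 4x3 grayscale frame: bright background, dark "mouse" pixels.
frame = [
    [255, 255, 255, 255],
    [255,  10,  20, 255],
    [255,  15, 255, 255],
]
THRESH = 50  # assumed intensity cutoff separating animal from background

# Collect dark pixels and average their coordinates -> animal centroid.
dark = [(x, y) for y, row in enumerate(frame)
        for x, v in enumerate(row) if v < THRESH]
cx = sum(x for x, _ in dark) / len(dark)
cy = sum(y for _, y in dark) / len(dark)
print((cx, cy))
```

The centroid per frame is what you would then feed into the zone tests above.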
When I previously tried inputting coordinates with a test video, the Bonsai program generated a CSV file with two columns of numbers and several columns of "true/false"; however, there were no column titles (attached). Does anyone know how to interpret this with respect to my Bonsai workflow?
Thank you all in advance for your help!
Best,
Stephanie Urod
4460L_behR_FP-video-testrun-062722.2_bonsai.csv