
add template of "a method section" describing each localizer #60

Open
Remi-Gau opened this issue Sep 28, 2020 · 1 comment
Labels
documentation Improvements or additions to documentation

Comments

@Remi-Gau
Contributor

No description provided.

Remi-Gau added the documentation label on Sep 30, 2020
@Remi-Gau
Contributor Author

Remi-Gau commented Oct 3, 2020

From Moh's paper (Rezk et al., 2020): https://sci-hub.st/downloads/2020-07-29/83/rezk2020.pdf#page=13&zoom=100,0,133

Visual stimuli consisted of random dot kinematograms (RDK) presented within an invisible circular aperture of 8 visual degrees centered around a white fixation cross. Each visual event was composed of 300 white dots (diameter = 0.1°) on a black background. Motion dots had a speed of 4°/s and a limited dot lifetime of 200 ms. The limited lifetime ensures that motion-direction discrimination relies on global motion perception rather than on tracking a single dot [64]. Dots moved in one of four possible motion directions (upward, downward, rightward, and leftward) with a 100% coherence level (Figure 4A). Each visual motion event lasted 1.2 s. In the visual static condition, each event had an RDK of 300 static dots, with dot locations randomized for each event.
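Since the goal of this issue is a reusable method-section template, here is a minimal sketch (Python, purely illustrative; the parameter names and template wording are my own assumptions, not this repo's actual API) of how the stimulus parameters above could be stored once and filled into a boilerplate paragraph:

```python
# Illustrative sketch only: field names and template text are assumptions.
# Parameter values are taken from the Rezk et al. (2020) excerpt above.
rdk_params = {
    "aperture_deg": 8,        # invisible circular aperture (visual degrees)
    "n_dots": 300,            # white dots per event
    "dot_diameter_deg": 0.1,  # dot diameter (visual degrees)
    "speed_deg_per_s": 4,     # dot speed
    "dot_lifetime_ms": 200,   # limited lifetime forces global motion reliance
    "directions": ["upward", "downward", "rightward", "leftward"],
    "coherence_pct": 100,
    "event_duration_s": 1.2,
}

TEMPLATE = (
    "Visual stimuli consisted of random dot kinematograms (RDK) within an "
    "invisible circular aperture of {aperture_deg} visual degrees. Each event "
    "was composed of {n_dots} white dots (diameter = {dot_diameter_deg}\u00b0) "
    "moving at {speed_deg_per_s}\u00b0/s with a limited dot lifetime of "
    "{dot_lifetime_ms} ms, in one of {n_directions} directions "
    "({direction_list}) at {coherence_pct}% coherence. Each motion event "
    "lasted {event_duration_s} s."
)

print(TEMPLATE.format(n_directions=len(rdk_params["directions"]),
                      direction_list=", ".join(rdk_params["directions"]),
                      **rdk_params))
```

Keeping the numbers in one dict means the same values could drive both the stimulus script and the generated methods text, so the two cannot drift apart.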
We implemented a traditional visual motion localizer to localize hMT+/V5, both at the group level and in each subject individually [29]. Visual motion and static conditions were generated using white random dot kinematograms (RDK) on a black background. Since the visual stimuli were presented at fixation, our localizer defines the whole hMT+/V5 complex, including both MT and MST [29]. We used dots moving in one of four possible translational motion directions (upward, downward, rightward, and leftward). The visual motion localizer started with an initial 5 s of blank screen and ended with 13 s of blank screen. The run had 6 blocks of motion and static conditions. Blocks were separated from each other by an inter-block interval (IBI) of 8 s. Each block (∼15.6 s) had 12 stimuli of 1.2 s each, with an inter-stimulus interval (ISI) of 0.1 s. Each motion block had 3 repetitions of the 4 motion directions; the order of the motion directions was randomized within each block and balanced across the different motion blocks. In the static condition, the location of the dots was randomized for each event (inducing 12 changes within one block). The fixation cross was presented for the whole duration of the visual localizer. To minimize eye movements and saccadic shifts [66, 67], participants were asked to detect a brief change (150 ms) in the color of the fixation cross. The number of targets (range: 0–2 per block) was randomized and balanced across conditions. Participants performed the task while fMRI data were acquired, with an accuracy (mean ± SD) of 98.62% ± 2.36%.
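As a sanity check on the timing figures quoted above, a short sketch reproducing the arithmetic (the assumption that the run alternates 6 motion and 6 static blocks is mine; "6 blocks of motion and static conditions" could also be read as 6 blocks in total):

```python
# Sanity check of the block/run timing in the quoted paragraph.
# ASSUMPTION: 6 blocks per condition (12 blocks total); the text is ambiguous.
n_stim, stim_s, isi_s = 12, 1.2, 0.1
block_s = n_stim * (stim_s + isi_s)   # 12 * 1.3 = 15.6 s, matches the ~15.6 s
n_blocks = 6 * 2                      # motion + static blocks (assumed)
ibi_s, start_blank_s, end_blank_s = 8, 5, 13
run_s = start_blank_s + n_blocks * block_s + (n_blocks - 1) * ibi_s + end_blank_s
print(f"block = {block_s:.1f} s, run = {run_s:.1f} s ({run_s / 60:.1f} min)")
# block = 15.6 s, run = 293.2 s (4.9 min)
```

Note that the ∼15.6 s block duration only works out if each of the 12 stimuli is followed by its own 0.1 s ISI (12 × 1.3 s); with 11 ISIs the block would be 15.5 s.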
