The perception of movement in depth and the motion-based segmentation of images rely on integration both over space (spatial integration) and between the two eyes (interocular integration). In a recent series of experiments, I used a steady-state EEG design to measure response thresholds from separate neural populations coding movement and segmentation. EEG is an accessible and cost-effective way of measuring population responses from the whole brain in vivo, and I took advantage of this to collect several large datasets with subtle variations of the same basic design, allowing me to directly test specific model-based predictions of how integration occurs while also estimating the reproducibility of the main effects. Steady-state EEG also makes it possible to measure responses in infant observers, which I compared to the adult data to draw conclusions about how these mechanisms change over development. In our recent publication of these experiments in Nature Communications (Kohler, Meredith & Norcia, 2018), we show that the results challenge several predictions from the literature and provide evidence for a relatively late-developing mechanism in binocular vision that depends on both spatial and interocular integration and cannot be accommodated by existing models of movement in depth. Besides the Nature Communications paper, I have presented this work at the Vision Sciences Society conference in 2017 and 2018.
The demo provides a representation of the type of stimuli we used for that paper and allows you to manipulate the parameters in the same way we did. You can manipulate:

- interocular integration (dot_movement can be in-phase or anti-phase between the two eyes),
- spatial integration (turn reference on and off),
- whether the anti-phase stimulus gives rise to movement in depth (it does not when vertical is on),
- whether the horizontal anti-phase stimulus gives rise to plane_breaking,
- and whether or not the dots are correlated between the eyes (interocular_correlation).

The bottom half of the demo illustrates the stimuli seen by the two eyes, and the upper right illustrates the resulting percept (note that some percepts are hard to represent faithfully and are therefore described in words). You can also highlight the reference bars (dark gray), the test bars (light gray), and a single dot in each pair of bars (these highlights are for illustration purposes only and were never shown in the actual displays).
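To make these parameters more concrete, the sketch below (in Python, and not the code behind the demo) illustrates how in-phase versus anti-phase dot displacements could be generated for the two eyes, and why only horizontal anti-phase movement produces a time-varying disparity signal. All names here, including the function dot_displacements and its arguments, are hypothetical and simply mirror the demo's controls (dot_movement, vertical, interocular_correlation); the amplitudes and frequencies are placeholders, not the values used in the experiments.

```python
import numpy as np

def dot_displacements(n_frames=120,
                      amplitude=0.1,              # displacement amplitude (arbitrary units)
                      frequency=2.0,              # oscillation frequency in Hz
                      frame_rate=60.0,            # frames per second
                      dot_movement="anti-phase",  # "in-phase" or "anti-phase"
                      vertical=False,             # displace dots vertically instead of horizontally
                      interocular_correlation=True,
                      rng=None):
    """Illustrative sketch: per-frame (left, right) dot offsets for one bar of dots.

    In-phase movement shifts the two eyes' dots together (lateral motion),
    while anti-phase movement shifts them in opposite directions. For
    horizontal displacements, anti-phase movement changes binocular
    disparity over time, which can be perceived as movement in depth;
    vertical displacements produce no usable disparity signal.
    """
    rng = np.random.default_rng() if rng is None else rng
    t = np.arange(n_frames) / frame_rate

    # Uncorrelated dots: give the right eye an independent random phase,
    # so the two eyes' dot motions no longer match.
    phase_right = rng.uniform(0, 2 * np.pi) if not interocular_correlation else 0.0

    left = amplitude * np.sin(2 * np.pi * frequency * t)
    sign = 1.0 if dot_movement == "in-phase" else -1.0
    right = sign * amplitude * np.sin(2 * np.pi * frequency * t + phase_right)

    axis = "y" if vertical else "x"
    # Disparity is only defined for horizontal offsets; it oscillates for
    # horizontal anti-phase motion and stays constant for in-phase motion.
    disparity = left - right if axis == "x" else np.zeros_like(left)
    return {"axis": axis, "left": left, "right": right, "disparity": disparity}


if __name__ == "__main__":
    for movement in ("in-phase", "anti-phase"):
        out = dot_displacements(dot_movement=movement)
        print(movement, "peak disparity:", np.abs(out["disparity"]).max())
```

With in-phase movement the two eyes' offsets are identical, so the disparity stays constant and the bar appears to move laterally; with horizontal anti-phase movement the disparity oscillates, which is the signal that can be seen as movement in depth.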