This indicates that a classifier trained only on pictures of separately presented faces and places may not be optimal for decoding object-based visual attention. In conclusion, we have shown that real-time fMRI allows for online prediction of attention to objects belonging to different object categories. Prediction is based on distributed patterns of activity in multiple brain regions (a minimal sketch of this type of pattern classification follows the abbreviations below). The outlined methodology not only allows us to probe object-based attention in an online setting, but also illustrates the potential to develop BCIs that are driven
by modulations of high-level cognitive states.

Acknowledgements
The authors gratefully acknowledge the support of the BrainGain Smart Mix Programme of the Netherlands Ministry of Economic Affairs and the Netherlands Ministry of Education, Culture and Science. The first
author was supported by a UTS grant from the University of Twente. We thank Paul Gaalman for his technical support during the experimental setup and development of the real-time fMRI pipeline. We are very grateful to the editors and the anonymous reviewers for their encouraging and constructive comments on our manuscript.

Abbreviations
aMTG, anterior middle temporal gyrus; BCI, brain–computer interface; BOLD, blood oxygen level-dependent; FFA, fusiform face area; fMRI, functional magnetic resonance imaging; GLM, general linear model; MoCo, motion-corrected; MVA-C, cluster-wise multivariate analysis; MVA-G, GLM-restricted multivariate analysis; MVA-T, mean-timeseries multivariate analysis; MVA-W, whole-brain multivariate analysis; MVPA, multivoxel pattern analysis; OFA, occipital face area; PACE, prospective acquisition correction; TR, repetition time
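As a minimal illustration of the pattern-classification step referred to above, the following Python sketch trains a linear classifier on localizer patterns and applies it to a single-TR pattern. It assumes a scikit-learn pipeline; the array shapes, voxel count, and random data are hypothetical placeholders, not values from our experiment.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical localizer data: one BOLD pattern per trial over selected voxels.
# Labels: 1 = face picture presented, 0 = place picture presented.
X_train = rng.standard_normal((120, 500))   # (n_trials, n_voxels)
y_train = rng.integers(0, 2, size=120)

# Linear classifier trained on separately presented faces and places.
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Online use: a single-TR pattern acquired while the subject attends to one
# object in a face-place hybrid is classified as soon as it arrives.
x_tr = rng.standard_normal((1, 500))
p_face = clf.predict_proba(x_tr)[0, 1]   # probability that attention is on the face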
Fig. S1. A basis set of 15 face-place pairs used in the decoding phase. Each pair was used twice in each condition, once with the face picture set as target and once with the place picture set as target. Note: copyrighted pictures used in the original experiment have been replaced in the above graphic by non-copyrighted look-alikes.

Fig. S2. A graph-based visual saliency algorithm was used to select the face-place pairs. The saliency of the 50/50 hybrid and of each of its constituents was computed, and only those pairs were selected for which the 50/50 hybrid had an equal number of salient points for the face and the place picture (a sketch of this selection criterion is given below).

Fig. S3. Stimulus timeline. (A) Example of an attend-face trial in the non-feedback condition. (B) Example of an attend-place trial in the feedback condition. After the cues had been presented, the face-place hybrid image was updated every TR depending on the classification result of the preceding TR (a sketch of such an update rule is also given below).

Fig. S4. List of all brain regions from which voxels were selected by the MVA-W classifier for training. Only regions that were activated across three or more subjects were used for further analyses.

Fig. S5.
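For concreteness, the following sketch shows one way the balanced-saliency criterion of Fig. S2 could be implemented. It assumes the three saliency maps (for the hybrid and its two constituents, e.g., from a graph-based visual saliency implementation) are precomputed and normalized to [0, 1]; the peak-detection parameters and the rule attributing each salient point to a constituent are illustrative assumptions.

import numpy as np
from scipy.ndimage import maximum_filter

def salient_points(sal_map, thresh=0.7, size=15):
    # Local maxima of a normalized saliency map that exceed a threshold.
    peaks = (sal_map == maximum_filter(sal_map, size=size)) & (sal_map > thresh)
    return np.argwhere(peaks)

def is_balanced_pair(sal_hybrid, sal_face, sal_place):
    # Attribute each salient point of the 50/50 hybrid to whichever
    # constituent (face or place) is more salient at that location, and
    # select the pair only if the two counts are equal.
    pts = salient_points(sal_hybrid)
    n_face = int(sum(sal_face[r, c] > sal_place[r, c] for r, c in pts))
    n_place = len(pts) - n_face
    return n_face == n_place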
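Similarly, a minimal sketch of the per-TR feedback update described for Fig. S3. The proportional mixing rule, step size, and clipping bounds are illustrative assumptions, not the exact experimental procedure.

import numpy as np

def update_hybrid(face_img, place_img, alpha, decoded, target, step=0.1):
    # alpha is the face picture's weight in the hybrid (1 - alpha is the
    # place weight). A classification matching the cued target makes the
    # target picture more visible; a mismatch makes it less visible. The
    # clipping bounds (assumed) keep both pictures partly visible.
    direction = 1.0 if target == "face" else -1.0
    alpha += direction * step if decoded == target else -direction * step
    alpha = float(np.clip(alpha, 0.2, 0.8))
    hybrid = alpha * face_img + (1.0 - alpha) * place_img
    return hybrid, alpha

# One TR of a hypothetical attend-face trial: the classifier decoded "face",
# so the face becomes slightly more visible in the next hybrid image.
face, place = np.random.rand(64, 64), np.random.rand(64, 64)
hybrid, alpha = update_hybrid(face, place, alpha=0.5, decoded="face", target="face")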