One outstanding question, however, is how these neural signals encoding predictions and their violation (PEs) are modulated by visual attention (Summerfield and Egner, 2009). A canonical view is that attention acts as a filter, suppressing irrelevant information to focus on the most relevant signals (Broadbent, 1958). For example, visual search is facilitated if unanticipated information is suppressed (Seidl et al., 2012). Accordingly, attention might mitigate the influence of unexpected information by dampening visual PE signals (Rao and Ballard, 2005), which would obviate the reconciliation of expected and observed information, thus reducing the net disparity between neural signals for expected and unexpected percepts (the PE-suppression model). Because expected and unexpected stimuli are associated with distinct fMRI multivoxel patterns (Kok et al., 2012a; de Gardelle et al., 2013), the PE-suppression model predicts that attention will impair our ability to decode whether a stimulus was expected or unexpected. Another, complementary view is that attention promotes learning about the statistical structure of the world (Zhao et al., 2013), with classic theories proposing that attention increases the rate at which stimulus-stimulus associations are acquired (Rescorla and Wagner, 1972; Pearce and Hall, 1980). Under this view, attention acts not to suppress but to enhance PEs, acting as a multiplicative scaling factor on the impact of PEs on subsequent predictions (Feldman and Friston, 2010), which should increase (rather than decrease) the disparity of multivoxel patterns associated with expected and unexpected information (the PE-promotion model).
Experimental protocol and predictions. A, Timeline of an example nontarget trial. An auditory cue preceded each visual stimulus (face or scene), followed by a jittered intertrial interval. A reminder of the current target category (here: outdoor scenes) remained on screen throughout each block. B, Two versions of tone-picture associations were used, with each subject experiencing only one version. In version 1 (Ver. 1), a rising tone indicated a 75% probability that the forthcoming nontarget stimulus would be a male face, and a falling tone a 75% probability that it would be an outdoor scene. In version 2 (Ver. 2), a rising tone indicated a 75% probability of an indoor scene, and a falling tone a 75% probability of a female face. C, Schematic illustration of model predictions: each disk represents the multivariate voxel pattern associated with a given experimental condition, and the overlap between disks represents the degree of pattern similarity. If attention promotes error signals, this should render the representations of unexpected stimuli more distinct from those of expected stimuli (left cluster), whereas the opposite would hold if attention suppresses prediction errors (right cluster).
To extract multivariate information content, the same models were fit to unsmoothed preprocessed images in their native resolution to reduce the blending of information patterns in the raw fMRI data. Then, for each trial type in each subject, a one-sample t test across runs was performed to produce a t-image. The t-images were further normalized across trial types by removing from each condition the cross-condition mean and dividing the resulting values by the cross-condition SD. This normalization removed trial-type-independent, individual baseline activity that could confound the leave-one-subject-out cross-validation while retaining the activation differences between trial types. As a result, for each subject, we obtained one pattern (i.e., one t-image) for each nontarget trial type. Therefore, although there was a higher total number of expected than unexpected trials, each trial type contributed an equal, single t-image per subject to the pattern classification analyses, so this analysis was not biased by unbalanced data points between trial types. The resulting t-images were defined as features containing task-relevant information (Jiang and Egner, 2013) on which a searchlight MVPA (Kriegeskorte et al., 2006) was conducted. Each searchlight was a spherical cluster with a radius of 2 voxels (6 mm) and contained up to 33 cortical voxels. A linear support vector machine (SVM) was used as the classifier, and a default constraint value (C = 1) was used for all SVMs. The performance of the SVMs was evaluated with an iterative leave-one-subject-out cross-validation procedure. After searchlight MVPA, a group classification accuracy image was obtained, in which each gray-matter voxel encoded the average classification accuracy of the searchlight centered at that voxel.
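The cross-condition normalization and leave-one-subject-out classification described above can be sketched roughly as follows. This is a minimal illustration using NumPy and scikit-learn; the synthetic data, array shapes, and variable names are assumptions for demonstration, not the authors' actual pipeline:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Assumed toy dimensions: one t-image per condition per subject, restricted
# to the voxels of a single searchlight (up to 33 voxels in the original).
n_subjects, n_conditions, n_voxels = 21, 2, 33
t_images = rng.normal(size=(n_subjects, n_conditions, n_voxels))
labels = np.array([0, 1])  # e.g., expected vs. unexpected

# Normalize across conditions within each subject: remove the
# cross-condition mean and divide by the cross-condition SD per voxel.
cond_mean = t_images.mean(axis=1, keepdims=True)
cond_sd = t_images.std(axis=1, keepdims=True, ddof=1)
features = (t_images - cond_mean) / cond_sd

# Leave-one-subject-out cross-validation with a linear SVM (C = 1).
accuracies = []
for test_subj in range(n_subjects):
    train_idx = [s for s in range(n_subjects) if s != test_subj]
    X_train = features[train_idx].reshape(-1, n_voxels)
    y_train = np.tile(labels, len(train_idx))
    clf = SVC(kernel="linear", C=1.0).fit(X_train, y_train)
    accuracies.append(clf.score(features[test_subj], labels))

print(f"mean LOSO accuracy: {np.mean(accuracies):.2f}")
```

In a full searchlight analysis this loop would run once per searchlight center, writing each mean accuracy back to the center voxel to form the group accuracy image.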
where N is the number of cases in MVPA (e.g., 21 subjects × 2 classes × 2 cases [e.g., attended vs. unattended trials and/or expected vs. unexpected trials]). p(x|ω1) can be calculated using Bayes' rule as follows:
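To make the role of N concrete, a common way to ask whether group classification accuracy exceeds chance is a one-sided binomial test over the N cases. The sketch below is a generic above-chance test, not necessarily the exact Bayesian computation in the original; the case count and number of correct classifications are illustrative assumptions:

```python
from math import comb

# Assumed illustrative numbers: N = 42 cases (21 subjects x 2 classes),
# of which 30 were classified correctly; chance level is 0.5.
n_cases, n_correct = 42, 30

# One-sided exact binomial tail: P(X >= n_correct) under chance performance.
p_value = sum(comb(n_cases, k) for k in range(n_correct, n_cases + 1)) / 2**n_cases
print(f"accuracy = {n_correct / n_cases:.2f}, p = {p_value:.4f}")
```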
To determine whether attention and expectation boosted category selectivity by shared or distinct mechanisms, we trained classifiers in searchlights that had displayed attentional enhancement of category selectivity in the above analysis to discriminate between attended face and scene stimuli, and then tested these classifiers' ability to discriminate the same categories in unattended/expected and unattended/unexpected trials. The converse analysis was also performed, training searchlights showing significant expectation-enhanced category selectivity on expected trials and testing their ability to discriminate attended or unattended unexpected stimuli. Finally, we tested whether attention modulated the effects of expectation. Here, instead of classifying face versus scene stimuli, we trained classifiers to distinguish between expected and unexpected stimuli. Then, we used the same approach as above to test whether expected stimuli could be distinguished from unexpected stimuli with higher accuracy under attended than unattended conditions, or vice versa. To assess the categorical stimulus specificity of the FFA/PPA, two MVPAs were conducted, one using only face trials and one using only scene trials. All MVPA results were corrected for multiple comparisons at p
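The train-on-one-condition, test-on-another logic described above can be sketched as follows. This is a hedged toy example with scikit-learn on synthetic data; the condition names, shapes, and values are assumptions for illustration only:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)

# Assumed toy setup: one pattern per subject per condition in a searchlight.
n_subjects, n_voxels = 21, 33
attended = rng.normal(size=(n_subjects, 2, n_voxels))    # classes: face, scene
unattended = rng.normal(size=(n_subjects, 2, n_voxels))  # same classes
labels = np.array([0, 1])  # 0 = face, 1 = scene

# Train the classifier on attended patterns only.
X_train = attended.reshape(-1, n_voxels)
y_train = np.tile(labels, n_subjects)
clf = SVC(kernel="linear", C=1.0).fit(X_train, y_train)

# Test generalization to the unattended patterns: shared mechanisms would
# predict above-chance transfer; distinct mechanisms would predict failure.
transfer_acc = clf.score(unattended.reshape(-1, n_voxels),
                         np.tile(labels, n_subjects))
print(f"attended -> unattended transfer accuracy: {transfer_acc:.2f}")
```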
However, the mere fact that MVPA was more successful at decoding expected versus unexpected stimuli under attended conditions does not imply that the signals exploited by the classifiers for this discrimination were actually multivariate in nature, nor that they relied on interspersed voxels of differential sensitivities to prediction and prediction error signals. Instead, our results might simply reflect a univariate (mean signal) advantage for the attended/expected or attended/unexpected conditions in category-specific brain regions. To rule out this possibility, we explored the data at the univariate level. First, we analyzed activation estimates (collapsed across FFA and PPA) in terms of mean MVPA feature values (i.e., t-values of activation normalized across experimental conditions within each subject) in a conventional ANOVA involving the factors of stimulus category, attention, and expectation (Fig. 4A). We observed a main effect of stimulus category (F(1,20) = 117.6, p
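The univariate control analysis amounts to collapsing each subject's multivoxel pattern to a single mean feature value per condition and comparing those means across subjects. A minimal sketch on synthetic data, testing just one factor (stimulus category) of the full category × attention × expectation ANOVA; the effect sizes and region size are assumptions:

```python
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(2)

# Assumed toy data: normalized feature values (t-values) for each subject
# and voxel in a region of interest, for two conditions.
n_subjects, n_voxels = 21, 100
face = rng.normal(loc=0.3, size=(n_subjects, n_voxels))
scene = rng.normal(loc=-0.3, size=(n_subjects, n_voxels))

# Univariate collapse: reduce each pattern to its mean feature value.
face_mean = face.mean(axis=1)
scene_mean = scene.mean(axis=1)

# Paired comparison of mean signal across subjects. If decoding advantages
# were driven purely by mean-signal differences, they should surface here.
t_stat, p_val = ttest_rel(face_mean, scene_mean)
print(f"t({n_subjects - 1}) = {t_stat:.2f}, p = {p_val:.4g}")
```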
Our study shares similarities with recent work by Kok et al. (2012a) in which enhanced decoding of grating orientations was observed in V1 when orientations were validly cued (i.e., expected). Notably, that study found independent, noninteracting decoding benefits of attention and expectation, whereas we here report the decoding of expected versus unexpected stimuli to be enhanced by attention. Although these results are superficially contradictory, the two studies addressed distinct questions: Kok et al. (2012a) were interested in decoding the identity of particular gratings rather than decoding their status of being expected or unexpected. In contrast, we investigated whether the neural differentiation between expected and unexpected stimulus category members would be enhanced or suppressed by attention. Although the two studies' implications for attention-expectation relations are therefore not directly comparable, the current study nevertheless extends Kok et al.'s findings of expectation-based enhancement of category selectivity from simple stimulus features in early visual cortex to complex object representations at higher levels of the ventral visual stream. Specifically, Kok et al. (2012a) found that expectation benefited the decoding of grating orientations in V1, but not in areas V2 and V3. The investigators proposed two potential explanations: either improved decoding in V1 reflected that region's preference for simple oriented stimuli, or higher visual regions are generally less susceptible to predictive processing. Our results argue against the latter possibility, because expectation greatly enhanced classification accuracy in high-level areas of the ventral visual stream. Therefore, enhanced selectivity for expected features appears to be a general-purpose mechanism by which context modulates perception across the visual hierarchy.