001). We used a linear support vector machine (SVM) for between-subject classification (BSC) in both category perception experiments.
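The leave-one-subject-out logic behind BSC with a linear SVM can be sketched as follows. This is an illustrative toy example with synthetic data, not the authors' code: the category prototypes, noise level, and dimensionality are all invented for demonstration, and the data are assumed to be already projected into a shared space.

```python
# Sketch of between-subject classification (BSC): train a linear SVM on
# all-but-one subject's response patterns in a common space, test on the
# left-out subject. All data here are synthetic stand-ins.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_subjects, n_cats, n_reps, n_dims = 5, 7, 8, 35  # 7 categories, as in the text

# Shared category prototypes; each subject sees prototype + idiosyncratic noise.
prototypes = rng.normal(size=(n_cats, n_dims))

def subject_data():
    X = np.vstack([prototypes[c] + 0.5 * rng.normal(size=(n_reps, n_dims))
                   for c in range(n_cats)])
    y = np.repeat(np.arange(n_cats), n_reps)
    return X, y

subjects = [subject_data() for _ in range(n_subjects)]

# Leave-one-subject-out cross-validation across subjects.
accuracies = []
for s in range(n_subjects):
    X_train = np.vstack([subjects[i][0] for i in range(n_subjects) if i != s])
    y_train = np.concatenate([subjects[i][1] for i in range(n_subjects) if i != s])
    X_test, y_test = subjects[s]
    clf = LinearSVC(C=1.0).fit(X_train, y_train)
    accuracies.append(clf.score(X_test, y_test))

print(f"mean BSC accuracy: {np.mean(accuracies):.3f} (chance = {1/7:.3f})")
```

On real data the same scheme applies, except that the feature space is the hyperaligned common model space rather than synthetic prototypes.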
After hyperalignment using parameters derived from the movie data, BSC identified the seven face and object categories with 63.9% accuracy (SE = 2.2%, chance = 14.3%; Figure 2A). The confusion matrix (Figure 2B) shows that the classifier distinguished human faces from nonhuman animal faces and monkey faces from dog faces but could not distinguish human female from male faces. The classifier could also distinguish chairs, shoes, and houses, and confusions between face and object categories were rare. Within-subject classification (WSC) accuracy (63.2% ± 2.1%) was equivalent to BSC of hyperaligned data, with a similar pattern of confusions, whereas BSC of anatomically aligned data (44.6% ± 1.4%) was significantly worse (p < 0.001; Figure 2). After hyperalignment using parameters derived from the movie data, BSC identified the six animal species with 68.0% accuracy (SE = 2.8%, chance = 16.7%; Figure 2A). The confusion matrix shows that the classifier could identify each individual species and that confusions were most often made within class, i.e., between insects, between birds, or between primates. WSC accuracy (68.9% ±
2.8%) was equivalent to BSC of hyperaligned data with a similar pattern of confusions. BSC of anatomically aligned animal species data (37.4% ± 1.5%) showed an even larger decrement relative to BSC of hyperaligned data than that found for the face and object perception data (p < 0.001). We next asked how many dimensions are necessary to capture the information that enables these high levels of BSC accuracy (Figure 1). We performed a principal components analysis (PCA) of the mean responses to each movie time point in common model space, averaging across subjects, then performed BSC of the movie, face and object, and animal
species data with varying numbers of top principal components (PCs). The results show that BSC accuracies for all three data sets increase little beyond 20 PCs (Figure 3A). We present results for a common model space with 35 dimensions, which affords BSC accuracies equivalent to those obtained using all 1,000 original dimensions (68.3% ± 2.6% versus 70.6% ± 2.6% for movie time segments; 64.8% ± 2.3% versus 63.9% ± 2.2% for faces and objects; 67.6% ± 3.1% versus 68.0% ± 2.8% for animal species; Figure 2A). The effect of the number of PCs on BSC was similar for models based only on Princeton (n = 10) or Dartmouth (n = 11) data, suggesting that this estimate of dimensionality is robust to differences in scanning hardware and scanning parameters (see Figure S3D). We next asked whether the information necessary for classification of stimuli in the two category perception experiments could be captured in smaller subspaces and whether these subspaces were similar.
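The dimensionality analysis described above can be sketched in code: fit a PCA to group-mean responses, then repeat the leave-one-subject-out classification using only the top few principal components. Everything below is synthetic and illustrative (the subspace size, noise level, and the helper `bsc_accuracy` are assumptions for the demo, not the paper's pipeline).

```python
# Sketch: how many top PCs of a common space are needed for BSC?
# Fit PCA on the group-mean responses, then classify with truncated bases.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import LinearSVC

rng = np.random.default_rng(1)
n_subjects, n_cats, n_reps, n_dims = 4, 6, 10, 100

# Category structure is planted in a low-dimensional subspace of the shared space.
basis = rng.normal(size=(15, n_dims))              # 15 informative directions
prototypes = rng.normal(size=(n_cats, 15)) @ basis

def subject_data():
    X = np.vstack([prototypes[c] + rng.normal(size=(n_reps, n_dims))
                   for c in range(n_cats)])
    return X, np.repeat(np.arange(n_cats), n_reps)

subjects = [subject_data() for _ in range(n_subjects)]

# PCA of the subject-averaged responses (analogous to PCA of mean movie responses).
group_mean = np.mean([X for X, _ in subjects], axis=0)
pca = PCA().fit(group_mean)

def bsc_accuracy(n_pcs):
    """Leave-one-subject-out BSC using only the top n_pcs components."""
    W = pca.components_[:n_pcs]                    # (n_pcs, n_dims) projection
    accs = []
    for s in range(n_subjects):
        X_tr = np.vstack([subjects[i][0] for i in range(n_subjects) if i != s]) @ W.T
        y_tr = np.concatenate([subjects[i][1] for i in range(n_subjects) if i != s])
        X_te, y_te = subjects[s][0] @ W.T, subjects[s][1]
        accs.append(LinearSVC().fit(X_tr, y_tr).score(X_te, y_te))
    return float(np.mean(accs))

for k in (2, 5, 20, 35):
    print(f"{k:2d} PCs -> BSC accuracy {bsc_accuracy(k):.3f}")
```

With synthetic data of this kind, accuracy typically saturates once the retained PCs span the informative subspace, which is the qualitative pattern the dimensionality analysis in the text probes.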