The ideal biomarker has high diagnostic specificity and sensitivity and/or is a good predictor of outcome. It
is therefore important to search for imaging parameters that vary strongly between the clinical phenotypes of interest (e.g., diagnostic groups or treatment effects) but are not influenced by random variability produced by differences in imaging hardware or software, or by intraindividual variability unrelated to the clinical state. Although imaging methods are being developed to the standard required for biomarker research (Table 1), at present there does not appear to be a single neuroimaging parameter of biomarker quality that distinguishes patients with a particular mental disorder from controls (let alone distinguishes between different mental disorders, which is arguably the clinically more relevant
question). In the following sections I will discuss some fruitful avenues for identifying reliable biomarkers and the challenges inherent in these promising approaches. If single neuroimaging parameters have largely failed the biomarker test, perhaps combining different measures, either from a single imaging modality or from several, in a multivariate analysis will yield higher diagnostic accuracy. The basic idea behind pattern classification approaches in neuroimaging is that the key differences between groups (e.g., patient versus control) or states (e.g., symptomatic versus remitted) may lie in the relationship between different parameters, for example the relative activation levels in different areas of the brain. Most neuroimaging pattern classification studies start from a very large number of features, up to the hundreds of thousands of voxels that can be captured in high-resolution experiments (feature extraction, see Figure 1). These
data are fed into a classifier algorithm, for example a support vector machine (SVM). This algorithm then finds the optimal separation between the two or more classes in question (task conditions or diagnostic groups). Classifiers can be trained to any level of accuracy, but their predictive performance will vary based on the quality of the data and the number of parameters needed. The accuracy of the prediction needs to be tested on new cases that are different from the training set. The classifier assigns a label to each of the new cases, for example “group 1” versus “group 2” (Figure 1), and these labels are compared with the “real” diagnosis or a known outcome. With the small sample sizes used in MVPA classification studies thus far, this has commonly been achieved with cross-validation procedures such as the “leave one out” procedure, where the classifier is trained on all cases but one and then tested for accurate classification of the remaining case.
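The pipeline described above can be sketched in a few lines of Python. This is a minimal illustration on synthetic data, not a real analysis: the group sizes, volume dimensions, and mean offset are arbitrary assumptions, and a simple nearest-centroid rule stands in for an SVM, since the point here is the feature extraction and leave-one-out logic rather than the classifier itself.

```python
import numpy as np

# Synthetic data: 20 subjects (10 per group), each with a 5x5x5
# activation volume. Real studies use far larger volumes; the group
# means are offset purely so this toy example is separable.
rng = np.random.default_rng(0)
group1 = rng.normal(0.0, 1.0, size=(10, 5, 5, 5))
group2 = rng.normal(0.8, 1.0, size=(10, 5, 5, 5))
volumes = np.concatenate([group1, group2])
labels = np.array([1] * 10 + [2] * 10)

# Feature extraction: flatten each 3-D volume into one row, giving
# the subjects-by-voxels matrix that a classifier expects.
X = volumes.reshape(len(labels), -1)

def nearest_centroid_predict(X_train, y_train, x_test):
    """Assign x_test to the class with the closest training-set mean.

    A deliberately simple stand-in for an SVM.
    """
    classes = np.unique(y_train)
    centroids = np.array([X_train[y_train == c].mean(axis=0)
                          for c in classes])
    return classes[np.argmin(np.linalg.norm(centroids - x_test, axis=1))]

def leave_one_out_accuracy(X, y):
    """Train on all cases but one, test the held-out case, repeat."""
    n = len(y)
    hits = 0
    for i in range(n):
        keep = np.arange(n) != i  # every case except case i
        hits += nearest_centroid_predict(X[keep], y[keep], X[i]) == y[i]
    return hits / n

acc = leave_one_out_accuracy(X, labels)
print(f"leave-one-out accuracy: {acc:.2f}")
```

The cross-validated accuracy, not the training accuracy, is the quantity of biomarker interest: each subject's predicted label comes from a classifier that never saw that subject during training.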