The diagnostic performance of a test, or the accuracy of a test to discriminate diseased cases from normal cases, is evaluated using Receiver Operating Characteristic (ROC) curve analysis (Metz, 1978; Zweig & Campbell, 1993). ROC curves can also be used to compare the diagnostic performance of two or more laboratory or diagnostic tests (Griner et al., 1981).
When you consider the results of a particular test in two populations, one population with a disease, the other population without the disease, you will rarely observe a perfect separation between the two groups. Indeed, the distribution of the test results will overlap, as shown in the following figure.
For every possible cutoff point or criterion value you select to discriminate between the two populations, there will be some cases with the disease correctly classified as positive (TP = True Positive fraction), but some cases with the disease will be classified as negative (FN = False Negative fraction). On the other hand, some cases without the disease will be correctly classified as negative (TN = True Negative fraction), but some cases without the disease will be classified as positive (FP = False Positive fraction).
Schematic outcomes of a test
The different fractions (TP, FP, TN, FN) are represented in the following table.
                       Disease
Test        Present               n      Absent                n      Total
Positive    True Positive (TP)    a      False Positive (FP)   c      a + c
Negative    False Negative (FN)   b      True Negative (TN)    d      b + d
Total                             a + b                        c + d
The following statistics can be defined:
Sensitivity: probability that a test result will be positive when the disease is present (true positive rate, expressed as a percentage).
= a / (a + b)

Specificity: probability that a test result will be negative when the disease is not present (true negative rate, expressed as a percentage).
= d / (c + d)

Positive likelihood ratio: ratio between the probability of a positive test result given the presence of the disease and the probability of a positive test result given the absence of the disease, i.e.
= True positive rate / False positive rate = Sensitivity / (1 - Specificity)

Negative likelihood ratio: ratio between the probability of a negative test result given the presence of the disease and the probability of a negative test result given the absence of the disease, i.e.
= False negative rate / True negative rate = (1 - Sensitivity) / Specificity

Positive predictive value: probability that the disease is present when the test is positive (expressed as a percentage).
= a / (a + c)

Negative predictive value: probability that the disease is not present when the test is negative (expressed as a percentage).
= d / (b + d)
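As a minimal sketch, these statistics can be computed directly from the four cell counts of the table above (a = TP, b = FN, c = FP, d = TN). The function name and the example counts are illustrative, not from the text.

```python
def diagnostic_stats(a, b, c, d):
    """Compute the six diagnostic statistics from 2x2 table counts:
    a = true positives, b = false negatives, c = false positives, d = true negatives."""
    sensitivity = a / (a + b)   # true positive rate
    specificity = d / (c + d)   # true negative rate
    return {
        "sensitivity": sensitivity,
        "specificity": specificity,
        "positive_likelihood_ratio": sensitivity / (1 - specificity),
        "negative_likelihood_ratio": (1 - sensitivity) / specificity,
        "positive_predictive_value": a / (a + c),
        "negative_predictive_value": d / (b + d),
    }

# Hypothetical example: 90 diseased cases (80 test positive),
# 100 healthy cases (95 test negative).
stats = diagnostic_stats(a=80, b=10, c=5, d=95)
for name, value in stats.items():
    print(f"{name}: {value:.3f}")
```

Note that sensitivity and specificity depend only on the test, while the predictive values also depend on how many diseased and healthy cases are in the sample (the prevalence).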
Sensitivity and specificity versus criterion value
When you select a higher criterion value, the false positive fraction will decrease and specificity will increase, but the true positive fraction and sensitivity will decrease as well:
When you select a lower criterion value, the true positive fraction and sensitivity will increase. On the other hand, the false positive fraction will also increase, and therefore the true negative fraction and specificity will decrease.
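The trade-off described above can be illustrated with two made-up lists of test values (not from the text), one for diseased and one for healthy cases, evaluated at a low and a high cutoff:

```python
# Illustrative test values; a result >= cutoff is called positive.
diseased = [4, 5, 6, 7, 8, 9]   # cases with the disease
healthy = [1, 2, 3, 4, 5, 6]    # cases without the disease

def sens_spec(cutoff):
    """Sensitivity and specificity at a given criterion value."""
    sens = sum(x >= cutoff for x in diseased) / len(diseased)
    spec = sum(x < cutoff for x in healthy) / len(healthy)
    return sens, spec

print(sens_spec(4))  # lower cutoff:  (1.0, 0.5) -- high sensitivity, low specificity
print(sens_spec(7))  # higher cutoff: (0.5, 1.0) -- low sensitivity, high specificity
```

Because the two distributions overlap (values 4 to 6 occur in both groups), no cutoff achieves 100% sensitivity and 100% specificity at the same time.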
The ROC curve
In a Receiver Operating Characteristic (ROC) curve the true positive rate (Sensitivity) is plotted as a function of the false positive rate (100 - Specificity) for different cutoff points. Each point on the ROC plot represents a sensitivity/specificity pair corresponding to a particular decision threshold. A test with perfect discrimination (no overlap in the two distributions) has a ROC plot that passes through the upper left corner (100% sensitivity, 100% specificity). Therefore the closer the ROC plot is to the upper left corner, the higher the overall accuracy of the test (Zweig & Campbell, 1993).
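A sketch of how the ROC points are obtained: sweep the cutoff across all observed test values and record the (false positive rate, true positive rate) pair at each one. The score lists are illustrative assumptions, not data from the text.

```python
# Illustrative test values; a result >= cutoff is called positive.
diseased = [4, 5, 6, 7, 8, 9]   # cases with the disease
healthy = [1, 2, 3, 4, 5, 6]    # cases without the disease

def roc_points(diseased, healthy):
    """Return (FPR, TPR) pairs for every observed cutoff value."""
    points = []
    for cutoff in sorted(set(diseased + healthy)):
        tpr = sum(x >= cutoff for x in diseased) / len(diseased)  # sensitivity
        fpr = sum(x >= cutoff for x in healthy) / len(healthy)    # 1 - specificity
        points.append((fpr, tpr))
    return points

for fpr, tpr in roc_points(diseased, healthy):
    print(f"FPR={fpr:.2f}  TPR={tpr:.2f}")
```

Plotting these pairs traces the ROC curve; a curve hugging the upper left corner (low FPR at high TPR) indicates better discrimination.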