Evaluating agreement between methods or observers is important in method comparison and reliability studies. Often we are interested in whether a new method can replace an existing invasive or expensive method, or in whether multiple methods or observers can be used interchangeably at the individual level. Intuitively, if individual measurements from different methods are as close to one another as replicated measurements within a method, then the methods have good individual agreement and it may be justifiable to replace one method with the other. In this talk, I will present the coefficient of individual agreement (CIA) for assessing individual agreement between multiple methods, for cases with and without a reference method. I will also identify research topics in the area of assessing agreement that need future investigation.
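The abstract does not give the exact formula, but a commonly used formulation of the CIA is the ratio of the within-method mean squared deviation (between replicates) to the between-method mean squared deviation, with values near 1 indicating that between-method disagreement is no worse than within-method replication error. The sketch below computes that empirical ratio under this assumption; the function names and the simulated data are illustrative only and are not taken from the talk.

```python
import numpy as np

def within_method_msd(reps):
    """Mean squared difference over all distinct pairs of replicates within one method.

    reps: array of shape (n_subjects, n_replicates).
    """
    reps = np.asarray(reps, dtype=float)
    k = reps.shape[1]
    pairs = [(reps[:, i] - reps[:, j]) ** 2 for i in range(k) for j in range(i + 1, k)]
    return np.mean(pairs)

def between_method_msd(reps1, reps2):
    """Mean squared difference over all cross-method replicate pairs on the same subject."""
    reps1 = np.asarray(reps1, dtype=float)
    reps2 = np.asarray(reps2, dtype=float)
    return np.mean((reps1[:, :, None] - reps2[:, None, :]) ** 2)

def cia_no_reference(reps1, reps2):
    """CIA for two methods without a reference: average within-method MSD
    divided by the between-method MSD (assumed formulation)."""
    num = 0.5 * (within_method_msd(reps1) + within_method_msd(reps2))
    return num / between_method_msd(reps1, reps2)

def cia_with_reference(reps_new, reps_ref):
    """CIA when one method is a reference: within-reference MSD divided by
    the MSD between the new method and the reference (assumed formulation)."""
    return within_method_msd(reps_ref) / between_method_msd(reps_new, reps_ref)

# Purely illustrative simulated data: 50 subjects, 2 replicates per method.
rng = np.random.default_rng(0)
truth = rng.normal(100, 10, size=(50, 1))
method_a = truth + rng.normal(0, 2, size=(50, 2))
method_b = truth + rng.normal(0, 2, size=(50, 2))
print(cia_no_reference(method_a, method_b))
```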
A diagnostic test attempts to ascertain the presence or absence of disease. The performance of a dichotomous test is often summarized by its sensitivity and specificity. When the underlying test measure is continuous, the Receiver Operating Characteristic (ROC) curve and the area under the curve (AUC) are advocated as measures of test performance. This talk will discuss approaches to estimating sensitivity, specificity, the ROC curve, and the AUC when the gold standard is partially missing. It will also discuss a possible approach to estimating these measures when the underlying test measure is continuous and the full range of observations is not available or observations are sparse.
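For the complete-gold-standard case, these quantities have standard empirical estimators; the sketch below computes sensitivity and specificity at a given threshold and the empirical ROC curve with its trapezoidal AUC. It assumes a fully observed binary gold standard and simulated data, so it does not reproduce the partially-missing-gold-standard estimators that are the subject of the talk.

```python
import numpy as np

def sensitivity_specificity(scores, disease, threshold):
    """Sensitivity and specificity of the dichotomized rule score >= threshold.

    scores: continuous test measurements; disease: 0/1 gold-standard labels.
    """
    scores = np.asarray(scores, dtype=float)
    disease = np.asarray(disease, dtype=int)
    positive = scores >= threshold
    sens = np.mean(positive[disease == 1])    # true positive rate
    spec = np.mean(~positive[disease == 0])   # true negative rate
    return sens, spec

def empirical_roc_auc(scores, disease):
    """Empirical ROC points (FPR, TPR) over all observed thresholds and the
    trapezoidal AUC, assuming a fully observed gold standard."""
    thresholds = np.sort(np.unique(scores))[::-1]
    tpr, fpr = [0.0], [0.0]  # start at (0, 0): threshold above the maximum score
    for t in thresholds:
        sens, spec = sensitivity_specificity(scores, disease, t)
        tpr.append(sens)
        fpr.append(1.0 - spec)
    fpr, tpr = np.array(fpr), np.array(tpr)
    auc = np.sum(np.diff(fpr) * (tpr[1:] + tpr[:-1]) / 2.0)  # trapezoidal rule
    return fpr, tpr, auc

# Illustrative use: diseased subjects score higher on average.
rng = np.random.default_rng(1)
disease = rng.integers(0, 2, size=200)
scores = rng.normal(loc=1.0 * disease, scale=1.0)
fpr, tpr, auc = empirical_roc_auc(scores, disease)
print(f"AUC = {auc:.3f}")
```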