Assessment and Comparison of Continuous Measurement Systems
dc.contributor.author | Stevens, Nathaniel | |
dc.date.accessioned | 2014-12-10T18:55:47Z | |
dc.date.available | 2014-12-10T18:55:47Z | |
dc.date.issued | 2014-12-10 | |
dc.date.submitted | 2014 | |
dc.description.abstract | In this thesis we critically examine the assessment and comparison of continuous measurement systems. Measurement systems, defined to be the devices, people, and protocol used to make a measurement, are an important tool in a variety of contexts. In manufacturing contexts a measurement system may be used to monitor a manufacturing process; in healthcare contexts a measurement system may be used to evaluate the status of a patient. In all contexts it is desirable for the measurement system to be accurate and precise, so as to provide high-quality and reliable measurements. A measurement system assessment (MSA) study is performed to assess the adequacy, and in particular the variability (precision), of the measurement system. The Automotive Industry Action Group (AIAG) recommends a standard design for such a study in which 10 subjects are measured multiple times by each individual who operates the measurement system. In this thesis we propose alternative study designs which, with little extra effort, provide more precise evaluations of the measurement system’s performance. Specifically, we propose the use of unbalanced augmented plans which, by strategically using more subjects and fewer replicate measurements, are substantially more efficient and more informative than the AIAG recommendation. We consider cases in which the measurement system is operated by just one individual (or is automated) and cases in which it is operated by multiple individuals; in all cases, augmented plans are superior to the typical designs recommended by the AIAG. In situations where the measurement system is used routinely, and records of these single measurements on many subjects are kept, we propose incorporating this additional ‘baseline’ information into the planning and analysis of an MSA study. Once again we consider the scenarios in which the measurement system is operated by a single individual or by multiple individuals. In all cases, incorporating baseline information in the planning and analysis of an MSA study substantially increases the amount of information about subject-to-subject variation. This in turn allows for a much more precise assessment of the measurement system than is possible with the designs recommended by the AIAG. New measurement systems that are less expensive, require less manpower, and are perhaps less time-consuming are often developed. In such cases, potential customers may wish to compare the new measurement system with their existing one, to ensure that measurements made by the new system agree suitably with those made by the old. This comparison is typically done with a measurement system comparison (MSC) study, in which a number of randomly selected subjects are measured one or more times by each system. A variety of statistical techniques exist for analyzing MSC study data and quantifying the agreement between the two systems, but none is without challenges. We propose the probability of agreement, a new method for analyzing MSC data that more effectively and transparently quantifies the agreement between two measurement systems. The chief advantage of the probability of agreement is that it is intuitive and simple to interpret, and its interpretation is the same no matter how complicated the setting. We illustrate its applicability, and its superiority to existing techniques, in a variety of settings, and we also make recommendations for a study design that facilitates precise estimation of this probability. | en
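The probability of agreement named in the abstract lends itself to a short worked illustration. The sketch below is a minimal Monte Carlo estimate under an assumed model that the abstract does not spell out: each system measures a common true value with its own constant bias and independent Gaussian error, and two readings "agree" when they differ by at most a tolerance c. All function and parameter names (delta, sigma1, sigma2, c) are illustrative assumptions, not the thesis's notation.

    import numpy as np

    rng = np.random.default_rng(2014)

    def probability_of_agreement(delta, sigma1, sigma2, c, n_sim=200_000):
        """Monte Carlo estimate of P(|Y1 - Y2| <= c), assuming each system's
        reading is the common true value plus its own bias and independent
        Gaussian error. Under a constant relative bias `delta` the true value
        cancels from the difference, so we simulate D = Y1 - Y2 directly:
        D ~ N(delta, sigma1^2 + sigma2^2). Purely illustrative."""
        d = delta + rng.normal(0.0, np.hypot(sigma1, sigma2), n_sim)
        return np.mean(np.abs(d) <= c)

    # Example: a modest relative bias, comparable precision, tolerance c = 2.
    print(probability_of_agreement(delta=0.5, sigma1=1.0, sigma2=1.2, c=2.0))

Under this constant-bias model the difference is itself Gaussian, so the same probability is available in closed form as Phi((c - delta)/s) - Phi((-c - delta)/s) with s = sqrt(sigma1^2 + sigma2^2); simulation is shown only because it extends to more complicated settings, which is where the abstract claims the method's interpretation stays the same.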
dc.identifier.uri | http://hdl.handle.net/10012/8976 | |
dc.language.iso | en | en |
dc.pending | false | |
dc.publisher | University of Waterloo | en |
dc.subject | measurement system assessment | en |
dc.subject | measurement system comparison | en |
dc.subject | augmented plan | en |
dc.subject | baseline data | en |
dc.subject | repeatability | en |
dc.subject | reproducibility | en |
dc.subject | bias | en |
dc.subject | probability of agreement | en |
dc.subject.program | Statistics | en |
dc.title | Assessment and Comparison of Continuous Measurement Systems | en |
dc.type | Doctoral Thesis | en |
uws-etd.degree | Doctor of Philosophy | en |
uws-etd.degree.department | Statistics and Actuarial Science | en |
uws.peerReviewStatus | Unreviewed | en |
uws.scholarLevel | Graduate | en |
uws.typeOfResource | Text | en |