
23 ATTRIBUTIVE AGREEMENT ANALYSIS (MORE THAN 2 ATTRIBUTE LEVELS), PART 1

In the 23rd Minitab tutorial, we are in the final assembly department of the Smartboard Company. Here, in the early, late, and night shifts, all the individual skateboard components are assembled into a finished skateboard and subjected to a final visual surface inspection before dispatch to the customer. Depending on their visual appearance, the skateboards receive integer quality grades from 1 to 5, without intermediate grades: grade 1 indicates a damage-free, very good skateboard, while grade 5 is reserved for skateboards with very severe surface damage. One surface appraiser is available for visual quality control in each production shift, so that across the three production shifts a total of three different surface appraisers rate the skateboards with quality grades 1 to 5. The core of this Minitab tutorial will be to check whether each of the three appraisers achieves a high level of repeatability in his or her own assessments, and whether all three appraisers share a sufficiently consistent understanding of customer quality. Finally, it is important to check whether the team of appraisers as a whole has the same understanding of quality as the customer. In contrast to the previous training unit, in which only the two binary answer options "good" and "bad" were possible, in this training unit we are dealing with an appraiser agreement analysis in which five answers are possible, and these answers also differ in value relative to one another. For example, a grade of 1 for a very good skateboard has a completely different qualitative weight than a grade of 5 for a poor one.

Before we get into the attributive agreement analysis, we will first learn how to create a measurement protocol layout when the characteristic levels form an ordered sequence of values. We will then move on to analyzing the appraiser agreement with the complete data set and learn how to evaluate the agreement test within the appraisers. To assess the appraiser agreement, we will also learn how to use the so-called Fleiss kappa statistic, in addition to the classic agreement rates in percent, in order to derive a statement about the expected future agreement rate with a correspondingly defined probability of error. We will then get to know the very important Kendall coefficient of concordance, which, in contrast to the kappa value, not only provides an absolute statement as to whether there is agreement, but can also, by weighing the deviations relative to one another, provide a statement about the severity of the wrong decisions. With this knowledge, we will then also be able to assess the agreement rate of the appraisers' assessments in comparison to the customer standard, and use the corresponding quality criteria to work out how often the appraisers were of the same opinion, i.e., whether the appraisers have the same understanding of quality. In addition to the Kendall coefficient of concordance, we will also get to know the so-called Kendall correlation coefficient, which helps us obtain additional information about whether, for example, an appraiser tends to make less demanding judgments and therefore undesirably classifies a skateboard that is inadequate from the customer's point of view as a very good skateboard.
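
For orientation, here are the standard textbook definitions of the two agreement statistics named above; this notation is not taken from the tutorial itself, which derives the values step by step in Minitab. Fleiss' kappa compares the observed agreement with the agreement expected purely by chance,

    \kappa = \frac{\bar{P} - \bar{P}_e}{1 - \bar{P}_e},

where \bar{P} is the mean observed agreement across all rated skateboards and \bar{P}_e is the agreement expected from the marginal grade frequencies alone; \kappa = 1 means perfect agreement and \kappa = 0 means agreement no better than chance. Kendall's coefficient of concordance additionally exploits the ordering of the grades 1 to 5 by working with rank sums,

    W = \frac{12\,S}{m^{2}\,(n^{3}-n)}, \qquad S = \sum_{i=1}^{n} \bigl(R_i - \bar{R}\bigr)^{2},

where n is the number of skateboards, m the number of assessments per skateboard, R_i the rank sum of skateboard i, and \bar{R} the mean rank sum; a tie correction is subtracted in the denominator when identical grades occur, which is the normal case with only five grade levels. Because W is built on ranks, two assessments that differ by one grade are penalized far less than two assessments that differ by four grades, which is exactly the severity of the wrong decisions mentioned above.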

MAIN TOPICS MINITAB TUTORIAL 23, part 1

  • Create a measurement report layout for ordered value levels
  • Agreement analysis within the appraisers using the Fleiss kappa statistic
  • Derivation of the Kendall coefficient of concordance
  • Agreement analysis within the appraisers using the Kendall coefficient of concordance (see the sketch after this list)

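The following short Python sketch is not part of the Minitab tutorial; it only illustrates, with a small made-up data set, how the two statistics from the list above could be reproduced outside Minitab. The grade values and the helper function kendalls_w are hypothetical; fleiss_kappa and aggregate_raters come from the statsmodels package, and rankdata from SciPy.

    import numpy as np
    from scipy.stats import rankdata
    from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

    # Hypothetical ratings: rows = skateboards, columns = repeated assessments
    # (e.g., the trials of one appraiser, or one trial per appraiser),
    # values = quality grades 1..5 (ordinal, no intermediate grades).
    ratings = np.array([
        [1, 1, 2],
        [3, 3, 3],
        [5, 4, 5],
        [2, 2, 2],
        [4, 5, 4],
        [1, 2, 1],
    ])

    # Fleiss' kappa: aggregate_raters converts the raw grades into counts
    # per grade and skateboard, the input format fleiss_kappa expects.
    table, _ = aggregate_raters(ratings)
    kappa = fleiss_kappa(table, method="fleiss")

    def kendalls_w(scores):
        """Kendall's coefficient of concordance W with tie correction."""
        scores = np.asarray(scores, dtype=float)
        n, m = scores.shape                    # n skateboards, m assessments
        ranks = np.column_stack([rankdata(scores[:, j]) for j in range(m)])
        rank_sums = ranks.sum(axis=1)          # rank sum per skateboard
        s = ((rank_sums - rank_sums.mean()) ** 2).sum()
        t = 0.0                                # tie correction per assessment
        for j in range(m):
            _, counts = np.unique(scores[:, j], return_counts=True)
            t += (counts ** 3 - counts).sum()
        return 12 * s / (m ** 2 * (n ** 3 - n) - m * t)

    print(f"Fleiss' kappa: {kappa:.3f}")
    print(f"Kendall's W  : {kendalls_w(ratings):.3f}")

Values close to 1 for both statistics would indicate very consistent grading; the tutorial evaluates the same quantities directly within Minitab's attribute agreement analysis.
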
MAIN TOPICS MINITAB TUTORIAL 23, part 2

  • Derivation of the Kendall correlation coefficient
  • Appraiser agreement analysis compared to the customer standard

MAIN TOPICS MINITAB TUTORIAL 23, part 3

  • Agreement analysis using the Kendall correlation coefficient
  • Graphical evaluation of the appraisers' repeatability and reproducibility