
22 ATTRIBUTIVE AGREEMENT ANALYSIS (GOOD PART, BAD PART)
In the 22nd Minitab tutorial, we accompany the final inspection station of the Smartboard Company. Here, the skateboards assembled in the early, late, and night shifts are subjected to a final visual surface inspection and, depending on the extent of surface scratches, are declared good parts or bad parts before being shipped to the customer. Skateboards with a "GOOD" rating are sent to the customer, while skateboards with a "BAD" rating have to be scrapped at great expense. One employee per production shift is available for the visual surface inspection, so that across the three production shifts a total of three different surface appraisers classify the skateboards as "GOOD" or "BAD". Our task in this Minitab tutorial will be to check whether all three appraisers have an identical understanding of quality in their assessments, with regard to repeatability and reproducibility. In contrast to the previous training units, in this training unit we are no longer dealing with continuously scaled quality assessments, but with the attributive quality assessments "good part" and "bad part".

Before we get into the measurement system analysis required for this, we will first get an overview of the three important scale levels: the nominal, ordinal, and cardinal scales (for example, a GOOD/BAD rating is nominal, a scratch grade from 1 to 5 is ordinal, and a measured scratch depth in millimeters is cardinal). We will also create the useful measurement protocol layout for our agreement check. We will then use the complete data set to analyze the appraiser agreements and evaluate the corresponding match rates using the so-called Fleiss Kappa statistic and the corresponding Kappa values. Using a simple data set, we will actively work through the principle of the Kappa statistic, or Cohen's Kappa statistic, and understand how the corresponding results appear in the output window; a compact sketch of this principle and a small Python example follow below. We will learn how the Kappa statistic helps us to make a statement, for example, about the probability that a match rate achieved by the appraisers could also have occurred purely by chance.

We will first learn to evaluate the agreement rate within the appraisers using the Kappa statistic, and then see how the final inspection team also uses Kappa statistics to evaluate the agreement of the appraisers' assessments with the customer requirement. In doing so, we will be able to find out whether an inspector tends to declare actual bad parts as good parts, or vice versa. After these agreement checks against the customer standard, we will then examine not only how often the appraisers agreed with each other, but also how well the agreement rate of the appraiser team as a whole can be classified in relation to the customer standard. With these findings, we will be in a position to make appropriate recommendations for action, for example to establish a uniform understanding of quality in line with customer requirements as part of appraiser training. In this context, we will also become familiar with the two very useful graphical forms of presentation, "Agreement of assessments within the appraisers" and "Appraisers compared to the standard". These graphs are very helpful, especially in day-to-day business, for example to get a quick visual impression of the most important information from our attributive agreement analysis results.
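As a compact sketch of the principle behind the Kappa statistic (the notation and the example numbers here are our own illustration, not taken from the tutorial's data set): with p_o the observed agreement rate between two appraisers and p_e the agreement rate expected purely by chance from their marginal rating proportions, Cohen's Kappa is

\[
\kappa = \frac{p_o - p_e}{1 - p_e}
\]

For instance, if two appraisers rate 50 parts and agree on 45 of them, then p_o = 0.90; if appraiser A rates 60 % of the parts "GOOD" and appraiser B 70 %, the chance agreement is p_e = 0.6 · 0.7 + 0.4 · 0.3 = 0.54, giving κ = (0.90 − 0.54)/(1 − 0.54) ≈ 0.78. A value of κ = 1 indicates perfect agreement, while κ = 0 means the agreement is no better than chance.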
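For readers who want to trace this calculation outside Minitab, the following minimal Python sketch computes Cohen's Kappa by hand for two appraisers; the rating lists are made-up illustration data, not the tutorial's skateboard data set.

```python
# Minimal sketch: Cohen's Kappa by hand for two appraisers who rate
# the same parts as "GOOD" or "BAD" (illustrative data only).

appraiser_a = ["GOOD", "GOOD", "BAD", "GOOD", "BAD",
               "GOOD", "GOOD", "BAD", "GOOD", "GOOD"]
appraiser_b = ["GOOD", "BAD", "BAD", "GOOD", "BAD",
               "GOOD", "GOOD", "GOOD", "GOOD", "GOOD"]

n = len(appraiser_a)

# Observed agreement: share of parts where both appraisers give the same rating.
p_o = sum(a == b for a, b in zip(appraiser_a, appraiser_b)) / n

# Expected chance agreement: for each rating category, the product of the two
# appraisers' marginal rating proportions, summed over all categories.
categories = set(appraiser_a) | set(appraiser_b)
p_e = sum(
    (appraiser_a.count(c) / n) * (appraiser_b.count(c) / n)
    for c in categories
)

kappa = (p_o - p_e) / (1 - p_e)
print(f"observed agreement p_o = {p_o:.2f}")   # 0.80 for this data
print(f"chance agreement   p_e = {p_e:.2f}")   # 0.62 for this data
print(f"Cohen's kappa          = {kappa:.2f}") # about 0.47
```

This sketch only covers the Kappa point estimate; in the tutorial itself, Minitab additionally attaches a significance test to each Kappa value, which is what permits the statement about purely coincidental agreement mentioned above.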

MAIN TOPICS MINITAB TUTORIAL 22, PART 1

  • Scale levels, fundamentals
  • Nominal, ordinal, and cardinally scaled data types
  • Discrete versus continuous data

 

MAIN TOPICS MINITAB TUTORIAL 22, PART 2

  • Sample size for attributive MSA according to AIAG
  • Appraiser agreement rate for attributive data, principle
  • Creating the measurement report layout for appraiser agreement
  • Performing the appraiser agreement analysis for attributive data
  • Analysis of agreement rate within the appraisers
  • Fleiss Kappa and Cohen's Kappa statistics

 

MAIN TOPICS MINITAB TUTORIAL 22, PART 3

  • Analysis of appraiser versus standard compliance
  • Assessment of appraiser agreement based on the Fleiss Kappa statistic
  • Kappa statistic for assessing the chance match rate
  • Analysis of appraiser mismatches
  • Graphical measurement system analysis within the appraisers
  • Graphical measurement system analysis of the appraisers compared to the customer standard