The technical-didactic TABTRAINER® learning content has been developed by Prof. Dr. Mola and is based on his AQAS- and ISO 13053-accredited Six Sigma university teaching modules in the Mechanical Engineering and Technical Production Management degree programs at the University of Applied Sciences HRW, Germany.

His dissertation in the field of lean materials was awarded the highest possible distinction, **summa cum laude**.

He was awarded the prestigious title “**Professor of the Year 2023**”. The award has been organized as an annual, nationwide competition since 2006 and is held under the patronage of the German Federal Ministry of Education and Research and the Federal Ministry for Economic Affairs and Climate Action.

**Contact for Six Sigma teaching and training: info@sixsigmapro.de**

Content:

- 01 MINITAB® USER INTERFACE
- 02 TIME SERIES ANALYSIS
- 03 BOXPLOT ANALYSIS
- 04 PARETO ANALYSIS
- 05 t-TEST, 1-SAMPLE, PART 1
- 05 t-TEST, 1-SAMPLE, PART 2
- 05 t-TEST, 1-SAMPLE, PART 3
- 06 t-TEST, 2 SAMPLES
- 07 t-TEST, PAIRED SAMPLES
- 08 TEST FOR PROPORTIONS, BINOMIAL DISTRIBUTION
- 09 CHI-SQUARE TEST FOR PROPORTIONS
- 10 ONE-SAMPLE POISSON RATE
- 11 ONE-WAY ANALYSIS OF VARIANCE (ANOVA)
- 12 GENERAL LINEAR MODEL: 2-WAY ANOVA
- 13 BLOCKED ANOVA
- 14 SIMPLE CORRELATION AND SIMPLE REGRESSION
- 15 MULTIPLE CORRELATION ANALYSIS, MATRIX PLOT
- 16 POLYNOMIAL REGRESSION
- 17 POLYNOMIAL REGRESSION WITH BACKWARD ELIMINATION
- 18 GAGE R&R STUDY CROSSED, PART 1
- 18 GAGE R&R STUDY CROSSED, PART 2
- 18 GAGE R&R STUDY CROSSED, PART 3
- 19 CROSSED MSA STUDY, NON-VALID
- 20 NESTED MSA GAGE R&R STUDY, PART 1
- 20 NESTED MSA GAGE R&R STUDY, PART 2
- 21 MEASUREMENT SYSTEM ANALYSIS: STABILITY AND LINEARITY
- 22 ATTRIBUTIVE AGREEMENT ANALYSIS (GOOD PART, BAD PART), PART 1
- 22 ATTRIBUTIVE AGREEMENT ANALYSIS (GOOD PART, BAD PART), PART 2
- 22 ATTRIBUTIVE AGREEMENT ANALYSIS (GOOD PART, BAD PART), PART 3
- 23 ATTRIBUTIVE AGREEMENT ANALYSIS (MORE THAN 2 ATTRIBUTE LEVELS), PART 1
- 23 ATTRIBUTIVE AGREEMENT ANALYSIS (MORE THAN 2 ATTRIBUTE LEVELS), PART 2
- 23 ATTRIBUTIVE AGREEMENT ANALYSIS (MORE THAN 2 ATTRIBUTE LEVELS), PART 3
- 24 CONTROL CHARTS, CONTINUOUS DATA, PART 1
- 24 CONTROL CHARTS, CONTINUOUS DATA, PART 2
- 24 CONTROL CHARTS, CONTINUOUS DATA, PART 3
- 25 PROCESS STABILITY ATTRIBUTIVE DATA: P-, NP-, P′- CHART
- 26 PROCESS STABILITY ATTRIBUTIVE DATA: U-, C- CHART
- 27 PROCESS CAPABILITY, NORMALLY DISTRIBUTED, PART 1
- 27 PROCESS CAPABILITY, NORMALLY DISTRIBUTED, PART 2
- 27 PROCESS CAPABILITY, NORMALLY DISTRIBUTED, PART 3
- 28 PROCESS CAPABILITY, NOT NORMALLY DISTRIBUTED
- 29 PROCESS CAPABILITY, BINOMIALLY DISTRIBUTED
- 30 PROCESS CAPABILITY, POISSON DISTRIBUTED
- 31 DOE FULL FACTORIAL, 3 PREDICTORS, PART 1
- 31 DOE FULL FACTORIAL, 3 PREDICTORS, PART 2
- 31 DOE FULL FACTORIAL, 3 PREDICTORS, PART 3
- 31 DOE FULL FACTORIAL, 3 PREDICTORS, PART 4
- 32 DOE FULL FACTORIAL, CENTER POINTS, BLOCKS, PART 1
- 32 DOE FULL FACTORIAL, CENTER POINTS, BLOCKS, PART 2
- 32 DOE FULL FACTORIAL, CENTER POINTS, BLOCKS, PART 3
- 32 DOE FULL FACTORIAL, CENTER POINTS, BLOCKS, PART 4
- 33 DOE FRACTIONAL FACTORIAL, 6 PREDICTORS, PART 1
- 33 DOE FRACTIONAL FACTORIAL, 6 PREDICTORS, PART 2
- 33 DOE FRACTIONAL FACTORIAL, 6 PREDICTORS, PART 3
- 34 DOE: RESPONSE SURFACE DESIGN, PART 1
- 34 DOE: RESPONSE SURFACE DESIGN, PART 2

**01 MINITAB® USER INTERFACE**

In our first Minitab tutorial, we will familiarize ourselves with the Minitab user interface before we get into the actual data analyses, and see how to import, export and securely save Minitab file types. Once we know our way around the interface areas such as navigation, output and history, we will use a typical Minitab data set to see how a Minitab project can be structured. We will also take a closer look at the worksheets, learn how data worksheets are organized, and understand how the different data types are indexed there. Finally, we will learn how to integrate our own comments and text notes into the worksheets.

MAIN TOPICS

- Working in the navigation, output and history windows
- Data import from different sources
- Set specific default storage locations for projects
- Indexing of data types
- Customize the interface view
- Add comments and notes in text form
- Save and close worksheets and projects

**02 TIME SERIES ANALYSIS**

In the second Minitab tutorial, we will accompany the quality management team of Smartboard Company and see how the scrap rate of the last financial years is analyzed step by step with the help of a time series analysis. Using time series plots as an example, we will experience how data preparation and analysis are carried out, and get to know the different types of time series charts. We will learn about the useful timestamp function and how to extract subsets from a worksheet. With the useful so-called brush function, we will learn how to analyze individual data clusters in the edit mode of a graph. The useful calendar function will also be covered as part of our data preparation. We will also understand what so-called identification variables are and what benefits they offer for our data analysis. Finally, we will see how we can export our analysis results to PowerPoint or Word for team meetings with little effort.

MAIN TOPICS MINITAB TUTORIAL 02

- Time series plot, fundamentals
- Row and column capacity of Minitab worksheets
- Interpret import preview window
- Update file names
- Extract dates
- Design and structure of time series plots
- Working with the timestamp
- Extracting data
- Form subset of worksheets
- Extract weekdays from dates
- Highlighting data points in the edit mode of a graphic
- Examination of data clusters using the brush function
- Defining identification variables
- Export of Minitab analysis results to MS Office applications
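The weekday extraction and subsetting steps listed above can be sketched outside Minitab as well. The following minimal Python example, with invented dates and scrap values, shows the same idea: tag each record with its weekday and form a subset of the worksheet rows.

```python
from datetime import date

# (production date, scrap rate in %) pairs; values invented for illustration
records = [
    (date(2023, 1, 2), 4.2),
    (date(2023, 1, 3), 3.9),
    (date(2023, 1, 7), 6.1),
    (date(2023, 1, 9), 4.0),
]

# Extract the weekday name from each date, analogous to Minitab's date functions
weekdays = [d.strftime("%A") for d, _ in records]

# Form a subset of the records: Mondays only (weekday() == 0 means Monday)
mondays = [(d, scrap) for d, scrap in records if d.weekday() == 0]

print(weekdays)
print(mondays)
```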

**03 BOXPLOT ANALYSIS**

In the third Minitab tutorial we will accompany the quality improvement project of Smartboard Company to analyze the scrap rate of the last fiscal year using a boxplot analysis. We will understand how the team compares different data groups to find out, for example, whether more or less scrap was generated on certain production days than on others. In this context, we will learn what a boxplot is, how it is structured in principle, and what useful information this tool provides. We will also discover that the particular advantage of a boxplot analysis is that this graphical form of presentation allows us to compare statistical parameters such as the median, arithmetic mean, and minimum and maximum values of different data sets in a compact and clear way. We will also learn how to include additional information elements in the data display, carry out a hypothesis test for data outliers, and create and interpret a single value plot. The automation of recurring analyses using so-called macros, which is particularly useful in day-to-day business, will also be introduced. Finally, we will create a personalized button in the menu bar to perform recurring analysis routines – for example for the daily quality report – with a single click, in order to save time in the turbulent day-to-day business.

MAIN TOPICS MINITAB TUTORIAL 03

- Boxplot analysis, fundamentals
- Basic structure and interpretation of boxplots
- Quantiles, quartiles, medians and arithmetic means in boxplots
- Display of data outliers in the boxplot
- Boxplots for a data set with an even number of values
- Boxplots for a data set with an odd number of values
- Boxplot types in comparison
- Create and interpret boxplot
- Working in boxplot editing mode
- Hypothesis test for outliers according to Grubbs
- Generating and interpreting single value charts
- Automation of analyses with the help of macros
- Creating an individual button in the menu bar
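The statistics behind the boxplot topics above can be illustrated with a short sketch. This minimal Python example, using only the standard library and invented scrap-rate values, computes the quartiles, the interquartile range (IQR), and the 1.5 × IQR whisker fences that determine which points a boxplot draws as outliers.

```python
import statistics

# Invented scrap rates in %; the last value is a deliberate outlier
scrap_rates = [4.1, 4.3, 4.4, 4.6, 4.8, 5.0, 5.1, 5.3, 5.4, 9.8]

# statistics.quantiles with n=4 returns the three quartiles Q1, Q2 (median), Q3
q1, median, q3 = statistics.quantiles(scrap_rates, n=4)
iqr = q3 - q1

# Points beyond the 1.5*IQR fences are drawn individually as outliers
lower_fence = q1 - 1.5 * iqr
upper_fence = q3 + 1.5 * iqr
outliers = [x for x in scrap_rates if x < lower_fence or x > upper_fence]

print(f"Q1={q1:.3f}, median={median:.3f}, Q3={q3:.3f}, IQR={iqr:.3f}")
print("outliers:", outliers)
```

Note that different software packages use slightly different quantile conventions; Minitab's quartile method may therefore give marginally different fence values on the same data.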

**04 PARETO ANALYSIS**

In the fourth Minitab tutorial, we will see how the quality team at Smartboard Company uses a Pareto analysis to examine its delivery performance. We will first understand how a Pareto chart is basically structured and what information it provides. As part of our Pareto analysis, we will then get to know a number of other useful options, for example how to quickly retrieve dispersion and location parameters, or how to perform arithmetic operations using the calculator function. We will learn how to generate a pie chart and a histogram, and also understand, for example, how the class formation in the histogram is calculated. We will also learn how to assign continuously scaled data to categorical value ranges, and how to individually code these value ranges to create the Pareto chart. In addition, we will learn about data extraction using the corresponding function commands.

MAIN TOPICS MINITAB TUTORIAL 04

- Pareto analysis, fundamentals
- Retrieving and interpreting worksheet information
- Working with the calculator function
- Display of missing values in the data set
- Retrieving location and dispersion parameters using descriptive statistics
- Comparison of histogram types
- Creating and interpreting histograms and pie charts
- Calculation of the number and width of bars in the histogram
- Creating and interpreting a Pareto chart
- Creating specific value ranges in the Pareto chart
- Recoding of continuously scaled Pareto value ranges into categorical value ranges
- Targeted extraction of individual information from data cells
- Creation of Pareto diagrams based on different categorical variables
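The arithmetic behind a Pareto chart is simple enough to sketch in a few lines. The following Python example, with invented delivery-defect categories and counts, sorts the categories by frequency and accumulates percentages – exactly the ranking a Pareto chart visualizes.

```python
# Invented defect categories and counts for illustration
defects = {
    "late delivery": 58,
    "wrong item": 12,
    "damaged packaging": 21,
    "missing papers": 9,
}

total = sum(defects.values())

# Pareto principle: rank categories from most to least frequent
ranked = sorted(defects.items(), key=lambda kv: kv[1], reverse=True)

# Accumulate percentages down the ranking, as the Pareto curve does
cumulative = 0
for category, count in ranked:
    cumulative += count
    print(f"{category:20s} {count:4d}  {100 * cumulative / total:6.1f}%")
```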

**05 t-TEST, 1-SAMPLE**

In the fifth Minitab tutorial, we will accompany the heat treatment process of the skateboard axles and see how the so-called hypothesis test, t-test for one sample, can be used to find out whether the heat treatment process is set so that the skateboard axles achieve the required compressive strength. To achieve this, the skateboard axles undergo a multi-stage heat treatment process. Our task in this Minitab tutorial is to use sample data and the one-sample t-test to make a reliable recommendation to the production management as to whether the current heat treatment process is sufficiently well adjusted, or whether it might even need to be stopped and optimized if our hypothesis test shows that the required mean target value is not being achieved. At the core of this Minitab tutorial, we will experience how a hypothesis test is properly carried out on a sample, and check in advance whether our data set follows the laws of the normal distribution. With the help of a so-called discriminatory power analysis, we will work out whether the sample size is large enough. Using the density function and the probability distribution plot, we will learn how to classify the t-value in the t-test, and also understand which wrong decisions are possible in the context of a hypothesis test. With the help of a corresponding individual value plot, we will develop an understanding of the so-called confidence interval in the context of the one-sample t-test.

MAIN TOPICS MINITAB TUTORIAL 05, part 1

- t-test, one sample, fundamentals
- Retrieving and interpreting worksheet information
- Retrieving and interpreting descriptive statistics
- The discriminatory power of a hypothesis test
- Anderson-Darling test as a preliminary stage to the t-test
- Derivation of the probability plot based on the density function
- Interpretation of the probability plot
- Performance of the hypothesis test t-test for the mean value, 1 sample
- Establishing the null hypothesis and alternative hypothesis
- Test for normal distribution according to Anderson-Darling
- The probability plot of the normal distribution

MAIN TOPICS MINITAB TUTORIAL 05, part 2

- Generation and interpretation of individual value plot as part of the t-test
- Confidence level and probability of error
- Test sample size and significance value
- Type 1 error and type 2 error in the context of the hypothesis decision

MAIN TOPICS MINITAB TUTORIAL 05, part 3

- Power analysis and sample size in the t-test
- Influence of the sample size on the hypothesis result
- Graphical construction of the probability distribution
- Interpretation of the power curve
- Determination of the sample size based on the discrimination quality
- Influence of different sample sizes on the hypothesis decision
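The test statistic at the heart of this tutorial can be sketched with the standard library alone. The following Python example computes the one-sample t statistic, t = (x̄ − μ₀) / (s / √n); the strength values and the target mean μ₀ are invented for illustration.

```python
import math
import statistics

# Invented compressive-strength sample (MPa) and hypothesized target mean
strengths = [498.2, 501.1, 499.6, 502.3, 500.4, 497.9, 501.8, 500.7]
mu0 = 500.0

n = len(strengths)
x_bar = statistics.mean(strengths)
s = statistics.stdev(strengths)  # sample standard deviation (n-1 denominator)

# One-sample t statistic: standardized distance of the sample mean from mu0
t_value = (x_bar - mu0) / (s / math.sqrt(n))
df = n - 1  # degrees of freedom of the corresponding t distribution

print(f"n={n}, mean={x_bar:.3f}, s={s:.3f}, t={t_value:.3f}, df={df}")
```

The p-value then comes from the t distribution with df = n − 1 degrees of freedom, which Minitab reports directly in its session output.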

**06 t-TEST, 2 SAMPLES**

In the 6th Minitab tutorial, we are in the purchasing department of Smartboard Company, and we will experience how the team uses the so-called two-sample t-test to evaluate the delivery quality of different screw suppliers. Smartboard Company previously purchased its screws from just one supplier. Due to the current increase in demand, there have been repeated supply bottlenecks at this supplier, which meant that skateboard production had to be stopped due to a lack of screws. For this reason, Smartboard Company has recently started to source from a second screw manufacturer in addition to the previous supplier, in order to avoid future production bottlenecks. From a quality point of view, it is therefore very important that the mechanical properties of the screws from the new supplier do not differ significantly from those of the previous supplier. Our task in this Minitab tutorial unit will be to use the two-sample t-test to work out whether there are significant differences in the strength properties between the two suppliers. As part of our two-sample t-test, we will also discuss a number of other useful functions and topics, such as dot plots and box plots, and get to know the useful “summary report” function in this context. We will understand what is meant by the statistical quality parameters kurtosis and skewness of a data landscape. In addition to the comparison of means, we will carry out a variance test for two samples in order to compare the variances of our data sets. For the variance test, we will familiarize ourselves with the Bonett and Levene procedures, in order to understand how to properly set up the corresponding hypotheses in the variance test and interpret them in an understandable way. Finally, we will get to know the so-called layout tool, in order to summarize the most important analysis graphs and plots together in one graphical layout.

MAIN TOPICS MINITAB TUTORIAL 06

- t-test, 2 samples
- Carrying out the discriminatory power analysis to determine the sample size
- Basic mathematical idea of degrees of freedom
- Performance and interpretation of the t-test for two samples
- Formulation of the null hypothesis and alternative hypothesis
- Box plot “Multiple Y, simple” as part of the t-test
- Rescaling of dot plots
- Creation and interpretation of boxplots of the t-test
- Working with the “Graphical summary” option
- Quality parameters kurtosis and skewness of the data distribution
- Test for variances, 2 samples
- Significance values in the variance test according to Bonett & Levene
- Working with the “Layout Tool”


**07 t-TEST, PAIRED SAMPLES**

In the 7th Minitab tutorial, we find ourselves in the development department of Smartboard Company. As part of prototype manufacturing, Smartboard Company has developed two new high-performance materials made of stainless-steel powder. The powder form is necessary because the skateboard axles for the professional sector are no longer to be manufactured from die-cast aluminum as before, but using the SLM production method. SLM stands for “selective laser melting” and is currently one of the most innovative rapid prototyping processes. In this process, the stainless-steel powder is completely remelted by a laser beam and applied in three-dimensional layers under computer control. The layer-by-layer application takes place in several cycles, with the next layer of powder being remelted and applied after each solidified layer until the 3D printing of the axle is complete. During prototype development, the team worked intensively on optimizing the stainless-steel powder used for 3D printing. Accordingly, our core task in this training session will be to find out which of the two types of stainless-steel powder has the better, i.e. higher, toughness properties on the basis of a random sample and a suitable hypothesis test. The core technological parameter of toughness is measured in joules and is a measure of the resistance to axle breakage or crack propagation in the skateboard axles under impact load. According to the research director’s specifications, we are to draw an indirect conclusion about the production population on the basis of a random sample, and make a 95% reliable recommendation as to whether the average toughness values differ from each other by at least 10 joules. In the course of this training session, we will guide the quality team in finding out which material has the better toughness properties for the skateboard axles by using the so-called hypothesis test, t-test for paired samples.
In the further course of this Minitab training, we will also see that this time we are not dealing with two independent production populations. We will learn why the paired-sample t-test, rather than the classic two-sample t-test, is the method of choice in such cases. We will take a closer look at the formula for the t-value in the paired-sample t-test, calculate the most important parameters, and compare them with the values in the output window. In this context, we will again work with the useful calculator function to determine the relevant parameters. Finally, we will perform the t-test for paired samples by using the so-called Minitab Assistant, which is particularly useful in turbulent day-to-day business for performing the correct calculations guided by useful decision questions.

MAIN TOPICS MINITAB TUTORIAL 07

- t-test, paired samples
- Derivation of the test statistic in the t-test for paired samples
- Formulation of the null hypothesis and alternative hypothesis
- t-test, paired samples versus t-test, 2 samples
- Interpretation of the hypothesis test results
- Working with the calculator function
- Working with the “Minitab Assistant”

**08 TEST FOR PROPORTIONS, BINOMIAL DISTRIBUTION**

In the 8th Minitab tutorial, we are on the skateboard deck production shop floor. Here, in a fully automated production process, several layers of wood are pressed together under high pressure with water-based glue and special epoxy resin to form a skateboard deck. Our task in this Minitab tutorial will be to use the so-called hypothesis test for proportions to draw a statistically indirect conclusion about the production population, in order to be able to make a statement about its defect rate with 95% certainty on the basis of a sample. For this purpose, we will subject a certain quantity of randomly selected skateboard decks to a visual surface inspection and, depending on their visual appearance, assess them as good or bad parts in terms of customer requirements. Good parts are skateboard decks that meet all visual customer requirements; bad parts, correspondingly, are skateboard decks that do not, and would either have to be repaired at great expense or scrapped. Our task in the first step will be to define the boundary conditions for the corresponding weekly sample, in order to draw, in the second step, an indirect conclusion from the defect rate of our sample to the defect rate in the population by using the correct hypothesis test. The special feature of this training unit is that we will be dealing with categorical quality judgments, to which the laws of the binomial distribution rather than the Gaussian normal distribution apply. We will therefore get to know the binomial distribution in more detail, and carry out the associated discriminatory power analysis for binomially distributed data in order to determine the appropriate sample size.
With this necessary preliminary work, we can then properly perform the so-called test for proportions, in order to make a 95% reliable recommendation for action to the management of Smartboard Company, based on our sample test results, which relates to the underlying production population.

MAIN TOPICS MINITAB TUTORIAL 08

- 1-sample test for proportions
- Understanding the binomial distribution
- Deriving the probability distribution of the binomial distribution
- Performing the discriminatory power analysis for binomially distributed data
- Normal approximation in the context of the binomial distribution
- Working with the “tally individual variables” function
- Sample size as a function of the discrimination quality
- Formulation of the null hypothesis and alternative hypothesis
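The binomial reasoning behind the proportion test can be sketched exactly with the standard library. This Python example, with an invented sample size, defect count, and hypothesized defect rate, computes the one-sided tail probability P(X ≥ k) of the binomial distribution, the exact analogue of the p-value Minitab reports.

```python
from math import comb

def binom_tail(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p), computed from the exact pmf."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

n = 50      # decks inspected in the weekly sample (invented)
k = 7       # defective decks found (invented)
p0 = 0.05   # hypothesized defect rate of the population

# One-sided p-value: how likely are 7 or more bad decks if p really is 5%?
p_value = binom_tail(k, n, p0)
print(f"P(X >= {k} | n={n}, p={p0}) = {p_value:.4f}")
```

A small tail probability here would indicate that the observed sample is hard to reconcile with the hypothesized 5% defect rate, which is the logic the hypothesis test formalizes.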

**09 CHI-SQUARE TEST FOR PROPORTIONS**

In the 9th Minitab tutorial, we are in Smartboard Company’s injection molding shop for the production of skateboard wheels. Skateboard wheels are manufactured using the injection molding process, for which the technical plastic polyurethane is used. In the first step, the starting material in the form of polyurethane granulate is thermally liquefied in the injection molding system. In the second step, the liquid polyurethane is injected into the corresponding mold at high pressure until the mold is completely filled. In the third step, the liquid polyurethane is cooled by high-pressure water cooling. After cooling and solidification, the finished skateboard wheels are automatically ejected from the injection mold in the fourth step, and the mold is released for the next wheel production. Large-scale injection molding production at Smartboard Company is carried out in three shifts, so that the required high quantities can be produced in early, late and night shifts and delivered to customers on time. For some time now, however, an increasing number of skateboard wheels have had to be scrapped due to various surface defects. It was therefore decided to launch a quality improvement project to identify the causes of the increased defect rates. Our central task in this Minitab tutorial will be to answer the following two key questions on the basis of a sample: 1. Is there a fundamental correlation between the high defect rate and the respective production shift? 2. Are there certain defect types in the respective production shifts that are generated significantly more frequently than other defect types? The special feature of this task is that we are dealing with more than two categories, to which the laws of the so-called chi-square distribution apply. In this training unit, we will learn how to properly perform and interpret the corresponding hypothesis test for chi-square distributed data.

MAIN TOPICS MINITAB TUTORIAL 09

- Preparing the data using the “Recode to text” function
- Counting variables using the “tally individual variables” function
- Use of bar charts in the chi-square test
- Identify interaction effects using grouped bar charts
- User-specific bar charts as part of the chi-square test
- Hypothesis definition in the chi-square test
- Derivation of the chi-square distribution from a standard normal distribution
- Number of degrees of freedom in the chi-square test
- Interpretation of the “cross tabulation” function in the context of the chi-square test
- Pearson’s chi-square value and the likelihood ratio
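The Pearson chi-square statistic behind this test can be sketched directly from a shift-by-defect contingency table: χ² = Σ (observed − expected)² / expected, with expected = row total × column total / grand total. The observed counts in this Python example are invented for illustration.

```python
# Invented contingency table: rows = shifts, columns = defect types
observed = [
    [20, 15, 10],   # early shift: scratch, crack, other
    [18, 14, 12],   # late shift
    [30, 25, 28],   # night shift
]

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
grand_total = sum(row_totals)

# Pearson chi-square: sum of squared deviations from the independence model
chi2 = 0.0
for i, row in enumerate(observed):
    for j, o in enumerate(row):
        e = row_totals[i] * col_totals[j] / grand_total  # expected count
        chi2 += (o - e) ** 2 / e

# Degrees of freedom for an r x c table: (r - 1) * (c - 1)
df = (len(observed) - 1) * (len(observed[0]) - 1)
print(f"chi-square = {chi2:.3f}, df = {df}")
```

The p-value then comes from the chi-square distribution with these degrees of freedom; a large χ² relative to that distribution indicates a shift-defect interaction.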

**10 ONE-SAMPLE POISSON RATE**

In the 10th Minitab tutorial, we are in the shockpad production of Smartboard Company. Shockpads are plastic plates that are installed between the skateboard deck and the axles. They are primarily used to absorb vibrations and shocks while riding. High-priced designer shockpads are currently being produced for a very discerning customer. The rectangular shockpads are manufactured in series production in a stamping process from high-priced polyurethane panels specially produced for the customer. A punching batch always consists of 500 shockpads and corresponds to one delivery batch. In contrast to other customers, this customer also attaches great importance to the visual appearance of the shockpads. Therefore, the possible defects in the punching process – scratches, cracks, uneven cut edges and punching cracks – must be avoided, since according to the contractual customer-supplier agreement, surface defects are only permitted up to a certain number. Specifically, the contractual complaints agreement stipulates that each packaging unit of 500 shockpads may contain a maximum total of 25 defects; the distribution of defects within the delivery is irrelevant. The basic defect rate per delivery, which must not be exceeded, is 5%. The central topic of this Minitab tutorial will therefore be to make a 95% reliable statement regarding the actual defect rate in the population of shockpad production on the basis of an existing sample data set. We will learn that hypothesis tests that follow the laws of the so-called Poisson distribution can be used in such cases. We will be able to distinguish between total occurrences and defect rate, and also become more familiar with the Poisson distribution using the associated density function, to gain insight into the normal approximation associated with it. We will also learn about useful options such as the sum and tally functions.
With the knowledge gained, we will then be able to properly perform the corresponding hypothesis test on total occurrences of Poisson distributed data, in order to make 95% confident statements about the total occurrences or defect rates in the population of the punching process.

MAIN TOPICS MINITAB TUTORIAL 10

- Total occurrences and defect rates of Poisson-distributed data
- Graphical derivation of the Poisson distribution
- Interpreting the probability density of the Poisson distribution
- Normal approximation in the context of the Poisson distribution
- Determining total occurrences in the Poisson distribution
- Hypothesis definitions for Poisson-distributed data
- Working with the sum function in the context of the Poisson distribution
- Working with the "tally individual variables" function in the context of the Poisson distribution
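The tutorial itself works in Minitab's menus, but the underlying one-sample Poisson test is easy to sketch outside it. The following Python fragment (SciPy assumed to be available; all counts are invented for illustration) tests whether an observed number of defects is still compatible with the contractual mean of 25 defects per 500 shockpads:

```python
from scipy.stats import poisson

# Hypothetical sample: 30 defects counted in one delivery batch of 500 shockpads
n_units = 500
defects_observed = 30
rate_h0 = 0.05                      # contractual maximum defect rate (5%)

mu0 = n_units * rate_h0             # expected total occurrences under H0: 25 defects
# One-sided exact p-value: probability of observing >= 30 defects
# if the true mean number of defects per batch is 25
p_value = poisson.sf(defects_observed - 1, mu0)
print(round(p_value, 4))
```

A p-value above 0.05 would mean that even a count above 25 is still statistically compatible with the contractual 5% rate at the 95% confidence level; this is exactly the kind of statement the tutorial derives in Minitab.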

**11 ONE-WAY ANALYSIS OF VARIANCE (ANOVA)
**The 11th Minitab tutorial is about the ball bearings Smartboard Company uses for skateboard wheels. An important criterion for the fit is the outer diameter of the ball bearings. Smartboard Company is comparing three new ball bearing suppliers, and this training session focuses on whether the outer diameters of the ball bearings from the three suppliers differ significantly from one another. The special feature of this task is that we are now dealing with more than two processes, i.e. more than two sample averages, so the hypothesis tests we have learned so far will not help. Before we start with the actual analysis of variance, often abbreviated as ANOVA, we will first use descriptive statistics to get an overview of the location and dispersion parameters of our three supplier data sets. A power analysis must also be carried out beforehand to determine the appropriate sample size. For didactic reasons, we will first get to know the relatively complex analysis of variance step by step on a deliberately small data set, and then use this preliminary work to enter the so-called one-way analysis of variance, often referred to as one-way ANOVA in day-to-day business. With this approach we can determine the scatter components that make up the total scatter. We will take a closer look at the ratio of these scatter components using the so-called F-distribution, named after its developer, Sir Ronald Fisher. We will learn how to use the F-distribution to determine the probability of a given scatter ratio, simply called the F-value, occurring, and for better understanding we will also derive the p-value for the respective F-value graphically.
In the final step, the associated hypothesis tests are used to properly determine whether there are significant differences between the ball bearing suppliers. Interesting and very useful in the context of this one-way analysis of variance are the so-called grouping letters, generated with the help of Fisher's pairwise comparison test, which will help us quickly recognize in day-to-day business which ball bearing suppliers differ significantly from each other.

MAIN TOPICS MINITAB TUTORIAL 11

- Setting up the hypothesis tests as part of the one-way ANOVA
- Adj SS and Adj MS values in the one-way ANOVA
- Derivation of the F-value in the one-way ANOVA
- Derivation of the F-distribution in the one-way ANOVA
- Error bar chart as part of the one-way ANOVA
- Fisher's pairwise comparison test
- Interpretation of the grouping letters based on the Fisher LSD method
- Interpretation of the Fisher individual tests for differences of means
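The F-value logic described above can be reproduced in a few lines of Python (SciPy assumed; the three supplier samples below are invented outer diameters in millimeters):

```python
from scipy import stats

# Invented outer-diameter samples (mm) for three hypothetical suppliers
supplier_a = [21.98, 22.01, 22.00, 21.99, 22.02]
supplier_b = [22.05, 22.07, 22.04, 22.06, 22.08]
supplier_c = [21.99, 22.00, 22.01, 21.98, 22.02]

# The one-way ANOVA F-value is the ratio of between-group to within-group mean squares
f_stat, p_value = stats.f_oneway(supplier_a, supplier_b, supplier_c)

# The p-value can equally be read off the F-distribution with
# (k - 1, N - k) degrees of freedom, as derived graphically in the tutorial
k, n_total = 3, 15
p_from_f = stats.f.sf(f_stat, k - 1, n_total - k)
print(f_stat, p_value)
```

Because supplier B's diameters sit clearly above the other two groups, the F-value is large and the p-value small, i.e. at least one supplier differs significantly.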

**12 GENERAL LINEAR MODEL: 2-WAY ANOVA
**In the 12th Minitab tutorial, we accompany the quality team at Smartboard Company as they examine the material strength of skateboard decks using the so-called 2-way ANOVA. North American maple is always used as the base material for high-quality skateboard decks, as this slow-growing wood is particularly stable and resistant. To produce the skateboard decks, two layers of maple are first pressed together in an automated laminating process, using water-based glue and a special epoxy resin mixture. The bond between the first two layers of wood in the core is particularly crucial for the cohesion of the entire laminated composite. The quality of this core lamination is tested randomly in a tensile shear test, in which the two laminated layers of wood are pulled apart, by applying a force parallel to the joint surface, until the laminate joint tears open. In principle, the higher the maximum tensile shear strength achieved in the laboratory test, the better. In this Minitab tutorial, we will be dealing with two categorical factors, each available in three factor levels. The core objective of this training unit is to draw a statistically sound conclusion about the production population, on the basis of a sample, as to whether the corresponding factors have a significant influence on the tensile shear strength. We will also analyze whether there are so-called interaction effects between the influencing factors, which may influence each other and thus also indirectly affect our response variable, the tensile shear strength. First, however, we need to carry out some data management, as the structure of the measurement protocol makes it necessary to restructure the data. This is a good opportunity to get to know the very useful option, stacking of column blocks.
To get a first impression of the trends and tendencies in our data, we will then work with boxplots before starting the 2-way ANOVA. Well prepared, we will move on to the actual 2-way analysis of variance in order to assess the significance of the trends identified in the boxplots. In this context, we will also get to know the very useful main effects and interaction plots, and learn how to interpret them. Finally, we will use the so-called Tukey significance test, and the associated grouping letters, to work out which of the parameter constellations can actually be declared significant.

MAIN TOPICS MINITAB TUTORIAL 12

- 2-Way ANOVA, fundamentals
- Data management in the preview window for data import
- Stacking of column blocks within the framework of ANOVA
- Boxplot analysis within the framework of ANOVA
- Adjust interquartile ranges graphically
- Include reference lines as part of the boxplot analysis
- Definition of the "general linear model (GLM)"
- Interpretation and evaluation of the variance and residual analysis
- Working with the "marking palette" in the context of residual analysis
- Interpretation of the ANOVA model quality
- Working with the histogram in the context of residual analysis
- Factor diagrams in the context of ANOVA
- Interpretation and editing of interaction diagrams
- Tukey’s pairwise comparisons test
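The "stacking of column blocks" step has a direct analogue in pandas, sketched here on an invented measurement protocol (column names and values are hypothetical):

```python
import pandas as pd

# Hypothetical measurement protocol: tensile shear strengths (N/mm²) recorded
# in one column per factor level, i.e. the wide "column block" layout of a lab sheet
wide = pd.DataFrame({
    "GlueA": [5.1, 5.3, 5.2],
    "GlueB": [4.8, 4.9, 4.7],
    "GlueC": [5.5, 5.6, 5.4],
})

# Stacking the column blocks: one factor column plus one response column,
# which is the layout an ANOVA routine expects
stacked = wide.melt(var_name="Glue", value_name="ShearStrength")
print(stacked)
```

After stacking, every row carries its factor level explicitly, so the response can be grouped, boxplotted, and fed into a 2-way ANOVA without further restructuring.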

**13 BLOCKED ANOVA
**In the 13th Minitab tutorial, we find ourselves in the maintenance and servicing department of Smartboard Company. This department is responsible for all improvement measures that ensure the best possible process availability, process efficiency, and quality output throughout the entire skateboard production process. To evaluate these three aspects with a single key figure, Smartboard Company uses the industry-proven key performance indicator O.E.E., a measure of overall equipment effectiveness: the higher the O.E.E. figure, the better the overall equipment effectiveness of skateboard production. The maintenance and servicing department has identified performance fluctuations in skateboard production based on the O.E.E. indicator, and suspects that these fluctuations may be due to the different product variants. The aim of this Minitab tutorial is to find out whether the different product variants, such as longboard, e-board, or mountainboard, have a statistically significant influence on the overall equipment effectiveness O.E.E. The so-called quality parameters of our variance model will be very important in this training unit, as they indicate how well our variance model explains the total variance. In this context, we will work in particular with the quality parameter adjusted R-squared, in order to assess whether our blocked variance model actually has high model quality. We will take this opportunity to familiarize ourselves with the useful table, fits and diagnostics for unusual observations, which provides a compact compilation of conspicuous unexplained scatter components, the so-called residuals. In order to assess this residual scatter graphically, we will get to know the very useful graphic option called the 4-in-1 plot.
We will also use the very helpful factor plots and Tukey's pairwise comparison test to identify which factor levels have significant or non-significant effects on our response variable, the overall equipment effectiveness O.E.E. Based on the corresponding grouping letters and the graphical Tukey simultaneous test of means, we will finally be able to make a 95% confident recommendation to the management of Smartboard Company as to which targeted optimization measures should be implemented for each product type.

MAIN TOPICS MINITAB TUTORIAL 13

- Blocked ANOVA, fundamentals
- Interpretation of the quality measures R-sq and R-sq(adj)
- Residual analysis within the framework of the blocked ANOVA
- Fits and diagnostics for unusual observations
- Factor diagrams within the framework of the blocked ANOVA
- Tukey simultaneous test of means
- Tukey simultaneous test for differences in mean
- Tukey simultaneous 95% confidence interval chart for differences in means
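The quality measures R-sq and R-sq(adj) listed above come from a simple ratio of residual to total variation; a minimal numeric sketch (observed and fitted O.E.E. values are invented, and the model term count p is assumed to be 2):

```python
import numpy as np

# Invented example: observed O.E.E. values (%) and the fitted values a
# variance model produced for them
y = np.array([78.0, 81.0, 84.0, 75.0, 79.0, 86.0])
fits = np.array([77.5, 81.5, 83.5, 75.5, 79.5, 85.5])

ss_res = np.sum((y - fits) ** 2)       # unexplained (residual) variation
ss_tot = np.sum((y - y.mean()) ** 2)   # total variation
n, p = len(y), 2                       # p = number of model terms incl. intercept (assumed)
r_sq = 1 - ss_res / ss_tot
r_sq_adj = 1 - (ss_res / (n - p)) / (ss_tot / (n - 1))
print(round(r_sq, 3), round(r_sq_adj, 3))
```

R-sq(adj) penalizes the raw R-sq for every extra model term, which is why the tutorial uses it, rather than R-sq, to judge the blocked variance model.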

**14 SIMPLE CORRELATION AND SIMPLE REGRESSION
**In the 14th Minitab tutorial, we visit the heat treatment facility at Smartboard Company. Here the skateboard axles are heat treated in order to achieve the material strength required by the customer. In addition to the heat treatment parameters, the copper content of the skateboard axles also influences the material strength. Against this background, this Minitab tutorial investigates the relationship between copper content and material strength on the basis of existing historical process data. We will first use a simple correlation analysis to investigate whether a relationship can be established between the copper content of the material and the axle material strength. If this is the case, we will use a simple regression analysis to show which copper content is required to achieve the material strength desired by the customer. As part of our correlation analysis, we will become familiar with the important Pearson correlation coefficient, in order to obtain a quantitative statement as to whether the relevant variables correlate weakly, strongly, or not at all. In this context, we will learn the basic principle of correlation analysis, based on the method of least squares, by actively calculating a complete correlation analysis step by step on a simplified data set, in order to understand how the results in the output window were obtained. Finally, we will use a simple regression analysis to describe our technical problem with a mathematical regression equation, in order to predict future material strengths as a function of the copper content with high prediction quality.

MAIN TOPICS MINITAB TUTORIAL 14

- Simple correlation analysis according to Pearson
- Correlation matrix
- Table of "pairwise correlations"
- Hypothesis test for pairwise correlation according to Pearson
- Working with "drawing tools" in the context of the matrix plot
- Simple regression analysis
- Adjusting regression model
- Least squares method
- Interpretation of fitted line plot
- Residual analysis as part of the regression
- Confidence intervals and prediction intervals
- Predicting the response variable by using the regression model
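The least-squares idea behind the simple correlation and regression analysis can be cross-checked in Python; the copper contents and strengths below are invented, and `scipy.stats.linregress` stands in for Minitab's fitted line plot:

```python
from scipy.stats import linregress

# Invented historical process data: copper content (%) vs. material strength (MPa)
copper = [0.10, 0.15, 0.20, 0.25, 0.30, 0.35]
strength = [410, 422, 431, 445, 452, 466]

fit = linregress(copper, strength)
# fit.rvalue is Pearson's correlation coefficient;
# fit.slope and fit.intercept define the least-squares regression line
predicted = fit.intercept + fit.slope * 0.28   # expected strength at 0.28 % copper
print(round(fit.rvalue, 3), round(predicted, 1))
```

A correlation coefficient close to 1 confirms a strong positive relationship, and the regression equation then answers the tutorial's practical question: which copper content yields the strength the customer requires.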

**15 MULTIPLE CORRELATION ANALYSIS, MATRIX PLOT
**In the 15th Minitab tutorial, we are on the high-speed test track of Smartboard Company. On this outdoor test track, which stretches downhill over several kilometers, the skateboards developed for speed records are tested. The maximum speed achieved is recorded using light barriers along the track. Ten skateboard pilots with different riding qualities are available as test riders, representing the riding behavior of the entire customer base. To reduce the very high personnel costs for the ten test pilots in the future, we accompany the team in this Minitab training session as they use a multiple correlation analysis to work out which of the ten test pilots have identical speed profiles. The key objective of this training session is to identify possible strong correlations between quantitative factors using pairwise correlation analysis. We will get to know the useful correlation matrix, often simply referred to as the matrix plot in day-to-day business, and see how we can use it to obtain an efficient qualitative overview of potential correlation trends between the test pilots. Building on this, we will move on to the actual Pearson correlation analysis, in order to substantiate our findings from the correlation matrix. Finally, we will use the corresponding significance values from the "Pearson pairwise correlation" hypothesis test to assess the statistical significance of the correlations at a 95% confidence level.

MAIN TOPICS MINITAB TUTORIAL 15

- Multiple correlation analysis according to Pearson
- Interpreting the correlation matrix
- Interpreting the table of "pairwise correlations"
- Set up the corresponding hypothesis tests
- Pairwise correlation analysis according to Pearson
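A matrix-plot-style overview of pairwise correlations can be sketched with NumPy; the three simulated "pilots" below are constructed so that two of them share nearly the same speed profile:

```python
import numpy as np

# Simulated top speeds (km/h) for three hypothetical test pilots over ten runs;
# pilots 1 and 2 are built on the same underlying profile, pilot 3 is independent
rng = np.random.default_rng(7)
base = rng.normal(80, 5, 10)
pilot1 = base + rng.normal(0, 0.5, 10)
pilot2 = base + rng.normal(0, 0.5, 10)
pilot3 = rng.normal(80, 5, 10)

# Each matrix entry is the Pearson correlation of one pilot pair
corr = np.corrcoef([pilot1, pilot2, pilot3])
print(corr.round(2))
```

Pairs with a correlation near 1, like pilots 1 and 2 here, are the redundant speed profiles the tutorial sets out to find; the significance of each pair is then confirmed with the pairwise hypothesis test.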

**16 POLYNOMIAL REGRESSION
**In the 16th Minitab tutorial, we are once again in the heat treatment department at Smartboard Company. Due to the current high order volume, the heat treatment plant is a bottleneck, and the quality team is therefore to investigate whether the axle strength required by customers can still be achieved with reduced annealing times by increasing the annealing temperature. Shortening the annealing times of the skateboard axles in the heat treatment plant would allow more axles to be heat treated in the same time. In this Minitab course, we will first determine the corresponding Pearson correlation coefficients using a simple correlation analysis. Based on these findings, we will apply the useful so-called polynomial regression analysis to mathematically model the relationship between the influencing variable and the response variable. Starting from a linear model, we will first generate a quadratic, and then a cubic model, and compare them with each other. Using the corresponding residual plots, we will examine why a cubic regression equation is preferable to a linear or quadratic regression equation in this training unit. Finally, we will turn to the very useful interactive response optimization, where, with our previously determined regression equation, we will determine the best possible parameter settings with 95% confidence.

MAIN TOPICS MINITAB TUTORIAL 16

- Polynomial regression
- Correlation analysis
- Correlation matrix
- Table of "pairwise correlations"
- Hypothesis test as part of the pairwise correlation analysis according to Pearson
- Reference lines in the matrix plot
- „4 in 1“ – residual diagram
- Quadratic and cubic regression models
- Response variable optimization in the context of regression analysis
- Confidence and prediction intervals in the context of the regression analysis
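The model comparison described above (linear vs. quadratic vs. cubic) can be illustrated with NumPy's polynomial fit; the temperatures and strengths are invented, with a deliberately curved response:

```python
import numpy as np

# Invented annealing temperatures (°C) and axle strengths (MPa) with a
# curved response, peaking between 360 and 380 °C
temp = np.array([300.0, 320.0, 340.0, 360.0, 380.0, 400.0, 420.0])
strength = np.array([402.0, 431.0, 452.0, 460.0, 455.0, 441.0, 415.0])

residual_ss = {}
for degree in (1, 2, 3):               # linear, quadratic, cubic model
    coeffs = np.polyfit(temp, strength, degree)
    fitted = np.polyval(coeffs, temp)
    residual_ss[degree] = float(np.sum((strength - fitted) ** 2))
print(residual_ss)
```

Extra polynomial terms can only reduce the residual sum of squares; whether that reduction is statistically meaningful is exactly what the tutorial's residual plots and hypothesis tests decide.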

**17 POLYNOMIAL REGRESSION WITH BACKWARD ELIMINATION
**In the 17th Minitab tutorial, we visit the Smartboard Company high-speed test track again. On this outdoor test track, which stretches downhill over several miles, the influence of the following parameters on the maximum achievable speed of a skateboard prototype is to be tested: the deck width in millimeters, the so-called deck flex as a measure of the elasticity of the skateboard decks in the two stages medium and hard, and the wheel hardness. The wheel hardness is determined according to the standardized Shore hardness test method: a metal pin with a geometrically standardized truncated cone tip is pressed into the wheel surface with a standardized spring force and application time; the greater the resistance of the skateboard wheel to the penetration of the metal pin, the greater the hardness value. The central aim of this Minitab tutorial is to find out which of the three influencing variables, also known as predictors, have a significant effect on our response variable, in this case the maximum achievable speed. To do this, we will first work with the so-called matrix plot to create a visual overview of possible trends and tendencies. We will then use the Pearson correlation coefficient to numerically assess the identified trends, and derive our corresponding variance model using polynomial regression analysis. As part of the evaluation of our variance model based on the classic quality parameters, we will become familiar with further quality parameters such as the PRESS value and Mallows' Cp. We will then get to know the "backward elimination" method, which is very important for model adjustment, in order to remove non-significant terms from our variance model.
Finally, we will learn about the very helpful and efficient options of automated backward elimination and best subsets regression, so that we can use the available results to make a statement about the extent to which the respective influencing variables affect the maximum achievable speed.

MAIN TOPICS MINITAB TUTORIAL 17

- Polynomial regression with backward elimination
- Correlation matrix
- Editing the correlation matrix
- Correlation analysis according to Pearson
- Table of "pairwise correlations"
- Analysis of the residual scatter
- Automated backward elimination
- Best subsets regression table
- Quality parameters PRESS, Mallows-Cp, AICc, BIC
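Backward elimination itself is a simple loop: refit the model, find the least significant term, drop it if its p-value exceeds α, and repeat. A minimal NumPy/SciPy sketch of that loop (not Minitab's implementation; the demo data are simulated so that x2 is pure noise):

```python
import numpy as np
from scipy import stats

def backward_eliminate(X, y, names, alpha=0.05):
    """Repeatedly refit an OLS model and drop the least significant term
    until every remaining term has p <= alpha. The intercept is always kept."""
    keep = list(range(X.shape[1]))
    while keep:
        A = np.column_stack([np.ones(len(y)), X[:, keep]])
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ beta
        dof = len(y) - A.shape[1]
        mse = resid @ resid / dof
        se = np.sqrt(np.diag(mse * np.linalg.inv(A.T @ A)))
        pvals = 2 * stats.t.sf(np.abs(beta / se), dof)   # two-sided t-tests
        worst = int(np.argmax(pvals[1:]))                # ignore the intercept
        if pvals[1:][worst] <= alpha:
            return [names[i] for i in keep]              # all terms significant
        del keep[worst]
    return []

# Simulated demo: y depends on x1 only; x2 is pure noise and should be dropped
rng = np.random.default_rng(1)
x1 = rng.normal(size=40)
x2 = rng.normal(size=40)
y = 3.0 * x1 + rng.normal(scale=0.5, size=40)
kept = backward_eliminate(np.column_stack([x1, x2]), y, ["x1", "x2"])
print(kept)
```

Minitab automates exactly this pruning, and additionally reports PRESS, Mallows' Cp, AICc, and BIC so that competing reduced models can be compared.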

**18 GAGE R&R STUDY CROSSED
**In the 18th Minitab tutorial, we look at the final assembly at Smartboard Company and accompany the quality team as they carry out a so-called measurement system analysis, the crossed Gage R&R study. In the first part of this multi-part Minitab tutorial, we will familiarize ourselves with the fundamentals in order to understand the most important definitions, such as measurement accuracy, repeatability, and reproducibility, as well as linearity, stability, and resolution. Well equipped with the fundamentals, we will then move on to the practical implementation of a crossed measurement system analysis in the second part, and take the opportunity to learn the difference between crossed and nested measurement system analysis. We will learn that there are basically two mathematical approaches to performing a crossed measurement system analysis: the ARM method and the ANOVA method. In order to understand both methods better, we will first carry out our crossed measurement system analysis using the ARM method in the second part of this training unit and, for comparison, repeat it using the ANOVA method on the same data set in the third part. We will actively calculate both methods manually, step by step, and derive the corresponding measurement system parameters, in order to understand how the results in the Minitab output window were generated.

MAIN TOPICS MINITAB TUTORIAL 18, part 1

- National and international MSA standards in comparison
- Process variation versus measurement system variation
- Measurement accuracy
- Repeatability versus Reproducibility
- Linearity, Stability, Bias
- Number of distinct categories, tolerance resolution

MAIN TOPICS MINITAB TUTORIAL 18, part 2

- ARM analysis approach as part of a crossed measurement system analysis
- Difference between crossed, nested and expanded MSA
- Manual derivation of all scattering components according to the ARM method
- Manual derivation of the tolerance resolution based on the ndc parameter
- Operator-related R-chart analysis
- Manual derivation of the control limits in the R-chart
- Operator-related Xbar-chart analysis
- Manual derivation of the control limits in the Xbar-chart
- Individual value diagram for analyzing the data scatter
- Operator-dependent boxplot analysis
- Interpretation of the Gage R&R report
- Set ID variables in the Gage R&R Report

MAIN TOPICS MINITAB TUTORIAL 18, part 3

- ANOVA analysis approach as part of MSA crossed
- 2-way ANOVA table with interactions
- Manual derivation of all scattering components according to ANOVA method
- Manual calculation of the ndc-parameter
- Hypothesis test within the framework of the ANOVA method
- Hypothesis test regarding interaction effects

**19 CROSSED MSA STUDY, NON-VALID
**In the 19th Minitab tutorial, we are back in the final assembly of Smartboard Company. As in the previous training unit, a crossed measurement system analysis is carried out, but this time we will get to know a measurement system that must be classified as unacceptable in terms of the standard specifications. The core objective of this Minitab tutorial is to work out a suitable procedure for identifying the causes of the inadequate measurement system on the basis of the failed quality criteria. In this context, we will also get to know other functions that are useful in day-to-day business, such as the creation of the Gage R&R study worksheet layout for the actual data acquisition, or the very useful "gage run chart", which is not generated by default as part of a measurement system analysis. We will see that we can generate this gage run chart very easily and thus obtain additional information from which to derive specific improvement measures for the non-valid measurement system.

MAIN TOPICS MINITAB TUTORIAL 19

- Crossed Gage R&R measurement system analysis
- Create Gage R&R study worksheet layout
- Measurement system variation in relation to the customer specification limits
- Manual derivation of the control limits in the R-Chart
- Analysis of the interaction effects by using the interaction plot
- Analysis of error classifications
- R-Chart and Xbar-Chart as part of the measurement system analysis
- Single value plot as part of the measurement system analysis
- Boxplots as part of the measurement system analysis
- Working with identification variables and marking palette
- Interaction plot between test object and operators
- Working with the Gage run chart

**20 NESTED MSA GAGE R&R STUDY
**In the 20th Minitab tutorial, we are in the axle test bench laboratory of Smartboard Company. This is where the dynamic load properties of the skateboard axles produced are examined. The skateboard axles are subjected to a dynamically increasing oscillating stress on the axle test bench until the load limit is reached and the axle breaks. For us this means that this time we are dealing with destructive material testing: each test part can only be tested once, so the important measurement system parameters, such as repeatability and reproducibility, cannot be determined by repeated measurements on the same part. In this Minitab tutorial, we will therefore get to know the so-called nested Gage R&R study as the method of choice, and understand which conditions must be met as a basic prerequisite for a nested measurement system analysis to work. In this context, we will apply the industry-proven 40:4 rule to ensure a sufficient sample size for a nested measurement system analysis. Using appropriate hypothesis tests, we will evaluate whether the testers or the production batches have a significant influence on the scattering behavior of our measurement results. We will also get to know the important ndc parameter as a quality measure for the resolution of our measurement system, and learn how to use this key parameter to assess whether the resolution of our measuring system is sufficiently high in terms of the applicable standard. Using the corresponding variance components, we will then work out whether the measurement system scatter makes up an impermissibly high proportion of the total scatter, and is therefore possibly above the permissible limit according to the standard specification.
We will examine the scattering behavior of the testers to determine whether, and if so which, of the testers contributes most strongly to the measurement system scatter. In this context, we will use helpful graphical representations such as control charts and boxplots to visually identify anomalies in the scattering behavior. Based on the identified causes, we will finally derive reliable recommendations for improving the measurement system, carry out a new measurement system analysis after the recommended improvement measures, and compare the results of the improved measurement system with those of the original one.

MAIN TOPICS MINITAB TUTORIAL 20, part 1

- Nested measurement system analysis
- Boundary conditions for a nested measurement system analysis
- Principle of homogeneity in the context of a nested measurement system analysis
- Repeatability as part of a nested measurement system analysis
- Reproducibility as part of a nested measurement system analysis
- Interpretation of the Gage R&R nested report

MAIN TOPICS MINITAB TUTORIAL 20, part 2

- Weak point analysis of an invalid measuring system
- Variance components
- R-Chart and Xbar chart
- Single value plot
- Boxplot and scatter plot
- Carrying out a second nested measurement system analysis
- Trailer: see 20.1

**21 MEASUREMENT SYSTEM ANALYSIS: STABILITY AND LINEARITY
**In the 21st Minitab tutorial, we accompany the ultrasonic testing laboratory of the Smartboard Company. In this department, the manufactured skateboard axles are subjected to ultrasonic testing to ensure that no undesirable cavities have formed in the axle material during production. In materials science, cavities are microscopically small, material-free areas which, above a certain size, can lead to a weakening of the material and thus to premature axle breakage even under the slightest stress. The ultrasonic testing used by the Smartboard Company for this axle test is one of the classic non-destructive acoustic testing methods in materials testing, and is based on the acoustic principle that sound waves are reflected to different degrees in different material environments. Depending on the size of a cavity in the axle material, the sound waves are reflected back to the ultrasonic probe to varying degrees. The size and position of the cavity, in micrometers, are calculated from the time it takes for the emitted ultrasonic pulse to be reflected back to the probe. The focus of this Minitab tutorial is to evaluate the ultrasonic testing device with regard to the measurement system criteria of linearity and stability, in order to detect any systematic measurement deviations. Specifically, we will learn how the stability and linearity parameters can be used to find out how accurately the ultrasonic device measures over the entire measuring range. For this purpose, the ultrasonic testing team will randomly select ten representative skateboard axles, based on the recommendations of the AIAG standard, and subject them to ultrasonic testing to determine the size of any cavities in the axle material. Before we get into the stability and linearity analysis, we will first get to know the useful variable counting function as part of data management.
We will then apply appropriate hypothesis tests to identify significant anomalies in terms of linearity and stability. We will use the scatter plot to gain a visual impression of trends and tendencies with regard to linearity and stability, so that we can make a statement about the existing measurement system stability and linearity on the basis of the corresponding quality criteria and the regression equation. Finally, we will optimize the measurement system based on our analysis results, and then reassess whether the implemented optimization measures have improved the measurement system's stability and linearity.
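
The linearity check via the regression equation works, in essence, as in the following sketch. The reference sizes and gage readings are invented placeholders, not the laboratory's data; in the usual AIAG convention the bias is regressed on the reference value, and %Linearity equals 100 times the absolute slope.

```python
# Invented reference cavity sizes and the gage's readings, in micrometers
reference = [2.0, 4.0, 6.0, 8.0, 10.0]
measured  = [2.1, 4.1, 6.2, 8.2, 10.3]

bias = [m - r for m, r in zip(measured, reference)]   # systematic deviation

# Least-squares fit of bias against the reference value, done by hand
n = len(reference)
mean_x = sum(reference) / n
mean_y = sum(bias) / n
sxx = sum((x - mean_x) ** 2 for x in reference)
sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(reference, bias))
slope = sxy / sxx
intercept = mean_y - slope * mean_x

# AIAG-style linearity figures: a slope near zero means good linearity
process_variation = 12.0                 # assumed process spread, µm
linearity = abs(slope) * process_variation
pct_linearity = 100 * abs(slope)

print(round(slope, 3), round(intercept, 3), round(pct_linearity, 1))
```

Here the bias grows with the cavity size (positive slope), i.e., the device overstates large cavities more than small ones, which is exactly the kind of systematic deviation the linearity study is meant to expose.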

MAIN TOPICS MINITAB TUTORIAL 21

- Measuring system stability and linearity, fundamentals
- Analysis of the measuring system stability
- Analysis of the measuring system linearity
- "Tally individual variables" function
- Linearity analysis by using the regression equation
- Correction of the systematic measurement deviations

**22 ATTRIBUTIVE AGREEMENT ANALYSIS (GOOD PART, BAD PART)
**In the 22nd Minitab tutorial, we accompany the final inspection station of the Smartboard Company. Here, the skateboards assembled in the early, late, and night shifts are subjected to a final visual surface inspection before being shipped to the customer, and are declared good or bad parts depending on the number of surface scratches. Skateboards with a "GOOD" rating are sent to the customer, while skateboards with a "BAD" rating have to be scrapped at great expense. One employee is available for the visual surface inspection in each production shift, so that across the three production shifts a total of three different surface appraisers classify the skateboards as "GOOD" or "BAD". Our task in this Minitab tutorial will be to check whether all three appraisers have an identical understanding of quality, with regard to repeatability and reproducibility, in their quality assessments. In contrast to the previous training units, we are no longer dealing with continuously scaled quality assessments, but with the attributive quality assessments "good part" and "bad part". Before we get into the required measurement system analysis, we will first get an overview of the three important scale levels: the nominal, ordinal, and cardinal scales. We will then create the useful measurement protocol layout for our agreement check. Using the complete data set, we will analyze the appraiser agreements and evaluate the corresponding agreement rates using the so-called Fleiss Kappa statistic and the corresponding Kappa values. We will actively work through the principle of the Kappa statistic, or Cohen's Kappa statistic, using a simple data set, and understand how the corresponding results appear in the output window. We will learn how the Kappa statistic helps us to make a statement, for example, about the probability that an agreement rate achieved by the appraisers could also have occurred by chance.
We will first learn to evaluate the agreement rate within the appraisers using the Kappa statistic, and then see how the final inspection team also uses the Kappa statistic to evaluate the agreement of the appraisers' assessments with the customer requirement. We will be able to find out whether an appraiser tends to declare actual bad parts as good parts, or vice versa. After the agreement tests in relation to the customer standard, we will then examine not only how often the appraisers agreed with each other, but also how well the agreement rate of the appraiser team as a whole can be classified in relation to the customer standard. With these findings, we will be in a position to make appropriate recommendations for action, for example to achieve a uniform understanding of quality in line with customer requirements as part of appraiser training. In this context, we will also become familiar with two very useful graphical forms of presentation: agreement of assessments within the appraisers, and appraisers compared to the standard. These graphs are very helpful, especially in day-to-day business, for example to get a quick visual impression of the most important results of our attributive agreement analysis.
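
The chance-corrected agreement idea behind the Kappa statistic can be reproduced by hand. The two rating series below are invented for illustration only; Cohen's kappa compares the observed agreement p_o with the agreement p_e expected purely by chance from the two appraisers' marginal proportions.

```python
# Invented GOOD/BAD ratings of the same ten skateboards by two appraisers
rater_a = ["GOOD", "GOOD", "BAD", "GOOD", "BAD",
           "GOOD", "BAD", "GOOD", "GOOD", "BAD"]
rater_b = ["GOOD", "BAD", "BAD", "GOOD", "BAD",
           "GOOD", "GOOD", "GOOD", "GOOD", "BAD"]

n = len(rater_a)

# Observed agreement: share of parts both appraisers rated identically
p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n

# Chance agreement from each appraiser's marginal GOOD/BAD proportions
p_a_good = rater_a.count("GOOD") / n
p_b_good = rater_b.count("GOOD") / n
p_e = p_a_good * p_b_good + (1 - p_a_good) * (1 - p_b_good)

# Cohen's kappa: agreement beyond chance, scaled to the maximum possible
kappa = (p_o - p_e) / (1 - p_e)
print(round(p_o, 2), round(p_e, 2), round(kappa, 3))
```

Although the two appraisers agree on 80 % of the parts, more than half of that agreement would be expected by chance alone, which is why the kappa value of roughly 0.58 paints a much more sober picture than the raw agreement rate.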

MAIN TOPICS MINITAB TUTORIAL 22, part 1

- Scale levels, fundamentals
- Nominal, ordinal, cardinally scaled data types
- Discrete versus continuous data

MAIN TOPICS MINITAB TUTORIAL 22, part 2

- Sample size for attributive MSA according to AIAG
- Appraiser agreement rate for attributive data, principle
- Create measurement report layout for appraiser agreement
- Performing the appraiser agreement analysis for attributive data
- Analysis of agreement rate within the appraisers
- Fleiss-Kappa and Cohen-Kappa statistics

MAIN TOPICS MINITAB TUTORIAL 22, Part 3

- Analysis of appraiser versus standard compliance
- Assessment of appraiser agreement based on the Fleiss-Kappa statistic
- Kappa statistic for assessing the coincidental match rate
- Analysis of appraiser mismatches
- Graphical MS analysis within the appraisers
- Graphical MS appraiser analysis compared to the customer standard
- Trailer: see 22.1

**23 ATTRIBUTIVE AGREEMENT ANALYSIS (MORE THAN 2 ATTRIBUTE LEVELS)
**In the 23rd Minitab tutorial, we are in the final assembly department of the Smartboard Company. Here, in the early, late, and night shifts, all the individual skateboard components are assembled into a finished skateboard and subjected to a final visual surface inspection before dispatch to the customer. Depending on their visual appearance, the skateboards receive integer quality grades from 1 to 5, without intermediate grades: grade 1 indicates a damage-free, very good skateboard, while grade 5 indicates a skateboard with very severe surface damage. One surface appraiser is available for visual quality control in each production shift, so that across the three production shifts a total of three different surface appraisers rate the skateboards with quality grades 1 to 5. The core of this Minitab tutorial will be to check whether all three appraisers have a high level of repeatability in their own assessments, and whether all three appraisers have a sufficiently identical understanding of customer quality. Finally, it is important to check whether the team of appraisers as a whole has the same understanding of quality as the customer. In contrast to the previous training unit, in which only the two binary answer options "good" and "bad" were possible, in this training unit we are dealing with an appraiser agreement analysis in which five answers are possible, and these answers also have a different value in relation to each other. For example, a grade of 1 for very good has a completely different qualitative value than a grade of 5 for poor.

Before we get into the attributive agreement analysis, we will first learn how to create a measurement protocol layout when the characteristic values form an ordered sequence. We will then move on to analyzing the appraiser agreements with the complete data set, and learn how to evaluate the agreement test within the appraisers. To assess the appraiser agreements, we will also learn how to use the so-called Fleiss Kappa statistic, in addition to the classic agreement rates in percent, in order to derive a statement about the expected future agreement rate with a correspondingly defined probability of error. We will then get to know the very important Kendall coefficient of concordance, which, in contrast to the Kappa value, not only provides an absolute statement as to whether there is agreement, but can also make a statement about the severity of wrong decisions through a relative consideration of the deviations. With this knowledge, we will then also be able to assess the agreement rate of the appraisers' assessments in comparison to the customer standard, and find out how we can use the corresponding quality criteria to work out how often the appraisers were of the same opinion, i.e., whether the appraisers have the same understanding of quality. In addition to the Kendall coefficient of concordance, we will also get to know the so-called Kendall correlation coefficient, which helps us to obtain additional information about whether, for example, an appraiser tends to make less demanding judgments and therefore undesirably classifies a skateboard that is inadequate from the customer's point of view as a very good skateboard.
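
The idea behind the Kendall coefficient of concordance can be sketched as follows. The grades are invented; because each appraiser's grades here happen to form a permutation of 1 to 5, they can be used directly as ranks, so the simple tie-free formula W = 12S / (m²(n³ − n)) applies (Minitab's implementation additionally corrects for ties).

```python
# Invented grades (1 = very good ... 5 = very poor) from three appraisers
# for five skateboards; each row happens to be a permutation of 1..5
appraisers = [
    [1, 2, 3, 4, 5],   # appraiser of the early shift
    [1, 3, 2, 4, 5],   # appraiser of the late shift
    [2, 1, 3, 4, 5],   # appraiser of the night shift
]
m = len(appraisers)       # number of appraisers
n = len(appraisers[0])    # number of skateboards

# Rank sum per skateboard across all appraisers
rank_sums = [sum(col) for col in zip(*appraisers)]

# S: squared deviations of the rank sums from their common mean
mean_rank_sum = m * (n + 1) / 2
s = sum((r - mean_rank_sum) ** 2 for r in rank_sums)

# Kendall's W: 1 = perfect concordance, 0 = no concordance
w = 12 * s / (m ** 2 * (n ** 3 - n))
print(round(w, 4))
```

Unlike a kappa value, W rewards near-misses: the three appraisers disagree only on the two best boards and only by one grade, so W stays close to 1.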

MAIN TOPICS MINITAB TUTORIAL 23, Part 1

- Create measurement report layout for ordered value levels
- Agreement analysis within the appraisers by using the Fleiss-Kappa statistic
- Derivation of the Kendall coefficient of concordance
- Agreement analysis within the appraisers by using the Kendall concordance

MAIN TOPICS MINITAB TUTORIAL 23, part 2

- Derivation of the Kendall correlation coefficient
- Appraiser agreements analysis compared to the customer standard

MAIN TOPICS MINITAB TUTORIAL 23, Part 3

- Agreement analysis by using the Kendall correlation coefficient
- Graphical evaluation of Appraisers repeatability and reproducibility

**24 CONTROL CHARTS, CONTINUOUS DATA**
In the 24th Minitab tutorial, we are in the die-casting production facility of Smartboard Company, where skateboard axles are manufactured on two die-casting systems. The central quality feature of the skateboard axles is the axle strength. Smartboard Company's customers require an average axle strength of 400 megapascals, plus or minus five megapascals. In this training session, we will accompany the Smartboard Company quality team and see how quality control charts are used to analyze whether the die-casting process can be classified as stable, both in terms of the customer specification and in terms of the Automotive Industry Action Group (AIAG) standards. We will see how the quality team first uses descriptive statistics to gain an initial impression of the average process situation and process variation before the actual stability analysis, and then evaluates process stability using the corresponding quality control charts. We will learn that different types of quality control charts are used depending on the scale level and subgroup size. In this context, we will first examine the scattering behavior of individual values and mean values using a simple data set, in order to better understand the statistical parameter known as the standard error. We will become familiar with the individual value chart (I chart), the mean value chart (Xbar chart), and the standard deviation chart (s chart). For didactic reasons, we will also manually calculate the respective upper and lower control limits in the quality control charts step by step, and compare them with the results in the output window. On this occasion, we will also use the range chart (R chart) and manually derive its control limits as well.
We will then learn in detail the eight most important control tests established in industry, based on the AIAG standards, which help us detect process instabilities. We will see that if a quality control chart is selected incorrectly, there is a risk that the control tests will react less sensitively to existing process instabilities. In this context, we will also learn how an overall process can be divided into two sub-processes using the useful function "Form subset of worksheet". Building on this knowledge, we will become familiar with the moving range chart (MR chart), and we will learn that the combined individual and moving range chart (I-MR chart) is very useful whenever individual values that are not grouped into subgroups are to be compared with each other.
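The scattering behavior described above can be illustrated with a small, stdlib-only Python sketch (the simulated strength values and the subgroup size of 5 are hypothetical, not the tutorial's data set): subgroup means scatter less than individual values, and the standard error quantifies this reduction as s divided by the square root of the subgroup size.

```python
import math
import random

# Hypothetical simulated axle strengths in MPa (mean 400, sd 5)
random.seed(1)
values = [random.gauss(400, 5) for _ in range(100)]

n = 5  # assumed subgroup size
subgroup_means = [sum(values[i:i + n]) / n for i in range(0, len(values), n)]

def stdev(xs):
    """Sample standard deviation."""
    m = sum(xs) / len(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / (len(xs) - 1))

s_individuals = stdev(values)              # scatter of individual values
se_means = s_individuals / math.sqrt(n)    # standard error of the mean

# The observed scatter of the subgroup means is close to the standard error
print(round(s_individuals, 2), round(stdev(subgroup_means), 2), round(se_means, 2))
```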

MAIN TOPICS MINITAB TUTORIAL 24, Part 1

- Assessment of the data landscape using descriptive statistics
- Process stability versus process capability
- Structure of control charts using the example of the I-chart and Xbar chart
- Identify process instabilities using tests for exceptional conditions

MAIN TOPICS MINITAB TUTORIAL 24, Part 2:

- Process analysis using the combined Xbar/R chart
- Identification of process instabilities using the control tests according to AIAG
- Manual derivation of the control limits in the S-chart
- Manual derivation of the control limits in the Xbar chart
- Manual derivation of the control limits in the R-chart
- Plant-related process analysis through subset formation
- Integration of customer specification limits and process target limits
- Working with the time stamp in control charts
- Division of an overall process into several sub-processes
- Alignment of scale labels in quality control charts
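As a hedged illustration of the manual control-limit derivation listed above, the following sketch computes Xbar and R chart limits from the tabulated Shewhart/AIAG constants for subgroup size 5 (A2 = 0.577, D3 = 0, D4 = 2.114); the strength readings are hypothetical example values.

```python
# Shewhart control chart constants for subgroup size n = 5
A2, D3, D4 = 0.577, 0.0, 2.114

# Hypothetical axle-strength subgroups in MPa
subgroups = [
    [401.2, 399.8, 400.5, 398.9, 400.1],
    [400.7, 399.5, 401.0, 400.2, 399.9],
    [399.1, 400.4, 400.8, 399.6, 400.3],
]

xbars = [sum(g) / len(g) for g in subgroups]       # subgroup means
ranges = [max(g) - min(g) for g in subgroups]      # subgroup ranges

xbarbar = sum(xbars) / len(xbars)   # grand mean: center line of the Xbar chart
rbar = sum(ranges) / len(ranges)    # mean range: center line of the R chart

# Xbar chart limits: grand mean +/- A2 * mean range
ucl_x = xbarbar + A2 * rbar
lcl_x = xbarbar - A2 * rbar
# R chart limits: D4 * mean range (upper) and D3 * mean range (lower)
ucl_r = D4 * rbar
lcl_r = D3 * rbar
print(round(lcl_x, 3), round(ucl_x, 3), round(lcl_r, 3), round(ucl_r, 3))
```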

MAIN TOPICS MINITAB TUTORIAL 24, Part 3

- Individual value chart (I chart)
- Moving range chart (MR chart)
- Derivation of the control limits in the MR chart
- Derivation of the control limits in the I chart
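The I-MR limit derivation can be sketched in a few lines; the factors 2.66 (= 3/d2 with d2 = 1.128) and 3.267 (= D4) are the standard constants for moving ranges of size 2, and the readings below are hypothetical.

```python
# Hypothetical individual strength readings in MPa, not grouped into subgroups
values = [400.3, 399.1, 401.2, 400.8, 398.7, 400.5, 399.9]

# Moving ranges: absolute differences between consecutive values
moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
mr_bar = sum(moving_ranges) / len(moving_ranges)   # center line of the MR chart
mean = sum(values) / len(values)                   # center line of the I chart

ucl_i = mean + 2.66 * mr_bar   # I chart upper control limit
lcl_i = mean - 2.66 * mr_bar   # I chart lower control limit
ucl_mr = 3.267 * mr_bar        # MR chart upper control limit (lower limit is 0)
print(round(lcl_i, 3), round(ucl_i, 3), round(ucl_mr, 3))
```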

**25 PROCESS STABILITY, ATTRIBUTIVE DATA: P-, NP-, P'-CHART**
In the 25th Minitab tutorial, we are back in the final assembly department of Smartboard Company. Here, in the early, late, and night shifts, all individual skateboard components are assembled into a finished skateboard and subjected to an automatic final surface inspection before being shipped to the customer. Skateboards without surface damage are classified in the attributive category "good" and can be sold. Skateboards with surface damage are classified in the attributive category "bad" and either undergo cost-intensive reworking or, in the worst case, have to be scrapped. The core topic of this training unit is how process stability can be investigated on the basis of such categorical data. To this end, we will first learn how the number of defective skateboards can be displayed chronologically as a defect rate using a suitable quality control chart. We will then use the correct choice of quality control chart to assess whether the skateboard assembly process can be classified as stable from a qualitative perspective. In this context, we will understand that if our process data is only available in two attributive categories, here "good part" and "bad part", quality control charts that take into account the laws of the binomial distribution are appropriate. We will get to know the corresponding quality control chart, the p-chart. Before we create the p-chart, we will first carry out a p-chart diagnosis to ensure that our data follows the laws of the binomial distribution sufficiently well. Here we examine important parameters such as overdispersion and underdispersion, which tell us how much the scattering behavior of our actual data deviates from that of a theoretically ideal binomial distribution, and whether this deviation is still acceptable.
With the knowledge gained up to this point, we will be able to decide, in the context of the corresponding AIAG standard specifications, whether we should continue to work with the p-chart or whether, due to inadmissible over- or underdispersion, we should use a modified p-chart, the p'-chart according to Laney. In addition to the p-chart, at the end of this training unit we will also get to know the useful np-chart, which depicts absolute rather than relative proportions of defective units clearly and chronologically.

MAIN TOPICS MINITAB TUTORIAL 25

- p-chart: Diagnosis
- p-chart: Structure and principle
- p-chart analysis
- Working with identification variables
- Manual derivation of the upper and lower control limits in the p-chart
- np-chart: Structure and principle
- np-chart analysis
- p'-chart according to Laney
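The manual derivation of the p-chart limits follows the usual binomial formula, p-bar plus or minus three times the square root of p-bar(1 - p-bar)/n. A minimal sketch, using hypothetical shift-by-shift inspection counts:

```python
import math

# Hypothetical counts: defective skateboards and units inspected per shift
defectives = [4, 6, 3, 7, 5]
inspected  = [120, 135, 110, 140, 125]

# Center line: overall proportion of defective units
p_bar = sum(defectives) / sum(inspected)

# Per-subgroup control limits (limits vary when subgroup sizes vary)
limits = []
for n in inspected:
    sigma = math.sqrt(p_bar * (1 - p_bar) / n)
    ucl = p_bar + 3 * sigma
    lcl = max(0.0, p_bar - 3 * sigma)   # a proportion cannot be negative
    limits.append((lcl, ucl))
print(round(p_bar, 4), [(round(l, 4), round(u, 4)) for l, u in limits])
```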

**26 PROCESS STABILITY, ATTRIBUTIVE DATA: U-, C-CHART**
In the 26th Minitab tutorial, we are in the final assembly department of Smartboard Company. Here, all individual skateboard components are currently assembled into a finished skateboard in the early, late, and night shifts, and then subjected to an automatic surface inspection before dispatch to the customer, in order to check that no surface damage in the form of scratches was caused during assembly. In the past, skateboards without surface scratches were classified in the "good" quality category and could be released to customers. Accordingly, skateboards with surface damage were classified in the "poor" quality category and therefore either had to be reworked at great expense or, in the worst case, scrapped. In order to record the extent of surface damage in a more differentiated way in the future, the automatic surface inspection system now also records the number of surface scratches. In this Minitab tutorial, we will therefore learn to map the number of scratches detected on each product using a quality control chart, and to analyze whether the assembly process can be classified as stable in terms of the number of scratches.

In contrast to the training unit in which we used the p- and np-charts to set the number of bad parts in relation to the respective sample subgroup, and carried out our stability analysis using the statistical laws of the binomial distribution, this practical scenario deals with the case where the number of events on a product, in our case the number of scratches on a skateboard, is the focus of the stability analysis. We will learn that we are then dealing with Poisson-distributed quality data. The central learning objective is to map the frequency of events, here the number of scratches on the skateboards, using a corresponding quality control chart, and to analyze the assembly process with regard to its process stability. In this context, we will get to know the u-chart and the c-chart, and learn how to use a u-chart diagnosis to check whether the scattering behavior of our data set follows the laws of the Poisson distribution sufficiently well. We will also see how to manually estimate the corresponding control limits according to the AIAG standard specifications. Based on our analysis results, we will then derive appropriate measures to improve process stability. We will subsequently carry out another stability analysis on the improved process and compare its stability with that of the original, unimproved process. We will learn how to use the useful "Stages" option to divide an existing overall process into sub-processes and thus obtain the correct sub-process-related control limits. In the last part of this Minitab tutorial, we will also get to know the very useful c-chart on another data set with constant subgroup sizes, and understand that the c-chart, in contrast to the u-chart, maps absolute event frequencies per subgroup.

MAIN TOPICS MINITAB TUTORIAL 26

- u-chart: Fundamentals
- u-chart diagnosis
- u-chart analysis
- Manual derivation of the control limits in the u-chart
- Division of the overall process into two sub-processes using the u-chart
- c-chart: Principle
- c-chart analysis
- Manual derivation of the control limits in the c-chart
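For Poisson-distributed event counts, the u-chart limits follow u-bar plus or minus three times the square root of u-bar/n. A minimal sketch with hypothetical scratch counts:

```python
import math

# Hypothetical data: scratches found and skateboards inspected per subgroup
scratches = [14, 9, 17, 11, 13]
units     = [50, 45, 55, 48, 52]

# Center line: mean number of scratches per unit
u_bar = sum(scratches) / sum(units)

# Per-subgroup limits: u_bar +/- 3 * sqrt(u_bar / n), floored at zero
limits = []
for n in units:
    sigma = math.sqrt(u_bar / n)
    limits.append((max(0.0, u_bar - 3 * sigma), u_bar + 3 * sigma))
print(round(u_bar, 4), [(round(lo, 4), round(hi, 4)) for lo, hi in limits])
```

The c-chart is the constant-subgroup-size counterpart: its center line is the mean count c-bar per subgroup, with limits c-bar plus or minus three times the square root of c-bar.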

**27 PROCESS CAPABILITY, NORMALLY DISTRIBUTED**
In the 27th Minitab tutorial, we will accompany the quality team of Smartboard Company as they analyze the process capability of the die-casting process for the production of skateboard axles as part of a quality improvement project. Before we get into the actual process capability analysis, we will first see which work steps are required in advance. We will see how the quality team uses the probability plot and the associated Anderson-Darling test to work out whether the sample data set available for the capability analysis follows the laws of the normal distribution. We will learn how to use appropriate quality control charts to check whether the die-casting process provides the necessary process stability in the run-up to the actual capability analysis. The core of this Minitab tutorial is getting to know all relevant capability indicators, which relate to the overall process capability on the one hand and the potential process capability on the other. In particular, we will get to know the central capability indices Cp, Cpk, Pp, and Ppk, and also work with the Taguchi capability index Cpm, which belongs to the second generation of capability indicators. We will use simple calculation examples to work out the most important key figures step by step, including manually, in order to understand how the capability figures shown in the output window come about in the first place. We will also understand what the Z-benchmark performance indicator means and how it relates to the sigma level. Using further indicators such as observed performance and expected performance, we will then also be able to assess process capability both within and between subgroups.
We will get to know the very useful Capability Six Pack function, which, especially in turbulent day-to-day business, helps us carry out all the analyses required in advance of the actual process capability analysis in just a few steps. Based on the knowledge gained and the available analysis results, we will then be able to assess the process capability of our die-casting process in a differentiated manner and derive appropriate measures to improve it. Once the improvement measures have been implemented, we will carry out a new process capability analysis and use the capability indicators to compare the improved process in detail with the original, unimproved process.
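The textbook definitions behind these indices can be sketched in a few lines. The process parameters below (mean, sigmas, specification limits 395 to 405 MPa, target 400 MPa) are illustrative assumptions; in a real analysis, the within-subgroup sigma would be estimated from the subgroups (for example via R-bar/d2).

```python
import math

def capability(mean, sigma_within, sigma_overall, lsl, usl, target):
    """Textbook capability and performance indices."""
    cp  = (usl - lsl) / (6 * sigma_within)                  # potential capability
    cpk = min(usl - mean, mean - lsl) / (3 * sigma_within)  # penalizes off-center mean
    pp  = (usl - lsl) / (6 * sigma_overall)                 # overall performance
    ppk = min(usl - mean, mean - lsl) / (3 * sigma_overall)
    # Taguchi index Cpm additionally penalizes deviation from the target value
    cpm = (usl - lsl) / (6 * math.sqrt(sigma_overall ** 2 + (mean - target) ** 2))
    return cp, cpk, pp, ppk, cpm

# Hypothetical axle-strength process, specification 400 +/- 5 MPa
cp, cpk, pp, ppk, cpm = capability(mean=401.0, sigma_within=1.2,
                                   sigma_overall=1.5, lsl=395.0,
                                   usl=405.0, target=400.0)
print(round(cp, 3), round(cpk, 3), round(pp, 3), round(ppk, 3), round(cpm, 3))
```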

MAIN TOPICS MINITAB TUTORIAL 27, Part 1

- Fundamentals of standard regulations regarding process capability
- Basic principle and logic of the process capability definition
- Process yields versus process capability
- Process capability indices: Pp, PPL, PPU, Ppk
- Process capability for centered processes
- Process capability for non-centered processes

MAIN TOPICS MINITAB TUTORIAL 27, Part 2

- Preparatory work for the process capability analysis
- Descriptive statistics as part of a process capability analysis
- Test for normal distribution, Anderson-Darling test
- Derivation of the probability plot from the probability function
- Process stability analysis: Xbar and R-chart
- Estimation methods for determining the standard deviation
- Key figures for overall process capability
- Key figures for potential process capability
- Total standard deviation vs. standard deviation within the subgroups
- Sigma level vs. process capability level
- Observed performance, expected performance overall, expected performance within
- Manual determination of the standard deviation for the potential process capability

MAIN TOPICS MINITAB TUTORIAL 27, Part 3

- Manual derivation of key performance indicators
- Manual derivation of the standard deviation with the R-bar method
- Manual derivation of the "summarized standard deviation"
- Benchmark-Z (Sigma level)
- Working with the dot plot as part of a process capability analysis
- Extraction of data information as part of a process capability analysis
- Working with the „Capability Six Pack“ option

**28 PROCESS CAPABILITY, NOT NORMALLY DISTRIBUTED**
In the 28th Minitab tutorial, we visit the last step of the die-casting process for the production of skateboard axles. Skateboard axles at Smartboard Company are currently produced by die casting. In order to achieve the strength values required by the customer, the skateboard axles are subjected to a heat treatment process at the end of the die-casting process. For this purpose, the axles are brought to heat treatment temperature in a continuous furnace and then cooled to room temperature in a water bath. During the rapid cooling, undesirable changes in the shape of the skateboard axles can occur; customers accept these only to a certain extent, so in accordance with customer requirements the shape change must not exceed a certain value. The core of this tutorial unit is to find out whether the heat treatment process achieves the required process performance Ppk of at least 1.33 in relation to the maximum permissible shape change. However, we will find out right at the beginning of the training unit that the data in this sample data set does not follow the laws of the normal distribution. A central topic will therefore be to first work out which distribution laws our non-normally distributed data set most closely follows. In this context, we will get to know the very helpful function "Individual Distribution Identification", so that, by means of a suitable mathematical data transformation, we can assess an existing non-normally distributed data landscape using the performance indicators we already know. We will get to know all transformation functions that are relevant in practice, and understand the system and criteria that can be used to determine the appropriate transformation function for the respective practical scenario.
As part of our process capability analysis of non-normally distributed process data, we will see how the quality team uses the efficient "Capability Six Pack" option to evaluate, in a single step, the Anderson-Darling test for normal distribution and the stability analysis using the corresponding control charts, in addition to the actual capability analysis. Based on the necessary data transformation of the non-normally distributed data set, we can then use the available results to assess whether the process performance required by the customer is achieved, and how high the defect rate and process yield are at the given process performance.
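As a minimal, stdlib-only illustration of why such transformations help, the sketch below applies a log transform (the Box-Cox transformation with lambda = 0) to simulated right-skewed data and compares the sample skewness before and after; the data and the choice of lambda are assumptions for illustration, not the tutorial's actual data set.

```python
import math
import random

# Hypothetical right-skewed (lognormal) shape-change measurements, all > 0
random.seed(3)
shape_change = [math.exp(random.gauss(0.0, 0.5)) for _ in range(200)]

def skewness(xs):
    """Sample skewness: 0 for symmetric data, > 0 for a right-skewed tail."""
    m = sum(xs) / len(xs)
    s = math.sqrt(sum((x - m) ** 2 for x in xs) / len(xs))
    return sum((x - m) ** 3 for x in xs) / (len(xs) * s ** 3)

# Log transform: the Box-Cox family member with lambda = 0
transformed = [math.log(x) for x in shape_change]

print(round(skewness(shape_change), 3), round(skewness(transformed), 3))
```

Tools such as Minitab's Individual Distribution Identification automate this idea: they fit several candidate distributions and transformations and report which one describes the data best.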

MAIN TOPICS MINITAB TUTORIAL 28

- Non-normally distributed process data, fundamentals
- Analysis of process data using descriptive statistics
- Boxplot Analysis
- Test for normal distribution according to Anderson-Darling
- Identification of the distribution type by using data transformation
- Checking the process stability of non-normally distributed data
- Use of the "Capability Six Pack" option
- Analysis of the process capability based on capability plot and process yield

**29 PROCESS CAPABILITY, BINOMIALLY DISTRIBUTED
**In the 29th Minitab tutorial, we take a closer look at the assembly process in the final assembly department at Smartboard Company. As we already know, all the individual skateboard components are assembled into a finished skateboard in this department and then subjected to an automatic surface inspection before being shipped to the customer. Skateboards without surface damage are classified in the attributive category "good" and can be sold to customers. Skateboards with surface damage are classified in the attributive category "bad" and must either be reworked at great expense or, in the worst case, scrapped. The special feature of this process capability analysis is that we are no longer dealing with normally distributed data: our quality attribute falls into the two categories good and bad, so the statistical laws of the so-called binomial distribution must be taken into account. In this Minitab tutorial, we will therefore use tools for our process capability analysis that respect the laws of the binomial distribution. We will learn how to test, in advance of the actual capability analysis, whether our data set actually follows the binomial distribution. With these findings, we can then carry out the necessary process stability analysis as a preliminary stage to the actual process capability analysis, using the corresponding control charts, such as the p-chart and the np-chart, which take the laws of the binomial distribution into account. Finally, we can assess the process performance of the binomially distributed process data. Parameters such as the so-called cumulative proportion of defective units and the rate of defective units will play an important role here. And we will be able to use a graphical derivation to understand the sigma level of our binomially distributed process data.
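The p-chart logic behind the stability analysis can be sketched numerically. The following Python snippet uses made-up inspection counts (all subgroup sizes and defective counts are hypothetical) to compute the overall proportion defective and the 3-sigma p-chart control limits:

```python
import numpy as np

# Hypothetical subgroups: skateboards inspected and found defective ("bad")
n = np.array([50, 50, 50, 50, 50, 50, 50, 50])        # subgroup sizes
defectives = np.array([3, 5, 2, 4, 6, 3, 2, 5])       # bad parts per subgroup

p = defectives / n                      # proportion defective per subgroup
p_bar = defectives.sum() / n.sum()      # overall proportion defective

# p-chart control limits (3-sigma), based on the binomial standard deviation
sigma_p = np.sqrt(p_bar * (1 - p_bar) / n)
ucl = p_bar + 3 * sigma_p
lcl = np.clip(p_bar - 3 * sigma_p, 0, None)   # a proportion cannot fall below 0

stable = bool(np.all((p >= lcl) & (p <= ucl)))
print("p-bar:", round(p_bar, 4))
print("UCL:", round(ucl[0], 4), "LCL:", round(lcl[0], 4))
print("all subgroups within limits:", stable)
```

If all subgroup proportions fall inside the limits, the process can be considered stable, which is the prerequisite for the subsequent binomial capability analysis.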

MAIN TOPICS MINITAB TUTORIAL 29

- Scale levels, fundamentals
- Process stability analysis of binomially distributed data
- Report on the process capability of binomially distributed data
- p-chart and np-chart in the context of binomially distributed process data
- Cumulative proportion of defective units
- Rate of defective units
- Graphical derivation of the sigma level of binomially distributed process data

**30 PROCESS CAPABILITY, POISSON DISTRIBUTED
**In the 30th Minitab tutorial, we are still in the final assembly department of Smartboard Company. Here, all individual skateboard components are assembled into a finished skateboard and subjected to an automatic surface inspection before shipping, to ensure that no undesirable surface damage, e.g. in the form of scratches, has occurred during final assembly that could lead to customer complaints. In the past, skateboards without surface scratches were classified in the attributive quality category "good part" and could be delivered to customers. Skateboards with surface damage, on the other hand, were classified in the attributive quality category "bad part" and either reworked at great expense or even scrapped. In order to record the severity of the surface damage in greater detail, the automatic surface inspection system now also records the number of scratches per skateboard. The focus of this training unit is to analyze whether the assembly process can be classified as capable in terms of the number of scratches. In a previous Minitab tutorial, in which the surface inspection system classified the skateboards into only the two categories good and bad, we carried out all the necessary analysis steps based on the binomial distribution in order to evaluate the process capability. In this practical scenario, the focus is no longer on the number of defective parts per subgroup, but on the number of defects per skateboard and subgroup. We will therefore learn that in this case the statistical laws of the Poisson distribution, rather than the binomial distribution, apply. The focus of this tutorial is therefore on process capability analysis based on the laws of the Poisson distribution.

We will learn that process stability is also an important prerequisite for a capability analysis based on Poisson distributed characteristics. In this context, we will learn how to perform the Poisson distribution test to ensure that our data actually follows the laws of the Poisson distribution. We will use the so-called u-chart and the corresponding control tests to quickly work out whether Smartboard Company's assembly process also has the required process stability under these conditions. In addition to the important "Poisson plot", we will get to know and interpret the informative "Cumulative DPU plot". We will then move into the actual process capability analysis of our Poisson distributed data to obtain the required capability metrics. In this context, we will get to know a number of important parameters, such as the lower and upper confidence interval limits and the key figure DPU, in order to assess the process performance in the necessary depth. As part of our analysis, we will also get to know a very efficient option for generating all the necessary analysis steps, covering the quality of the Poisson distribution fit, process stability and process capability, in one step in the form of a process capability report. With all the necessary information from this report, we can reliably derive whether our Poisson distributed process can be classified as capable or not capable in relation to the customer's target specification.
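The u-chart and DPU calculations can be illustrated with a small numerical sketch. Assuming made-up subgroup data (20 skateboards per subgroup, with counted scratches; all numbers are hypothetical), the following Python snippet computes the mean DPU and the Poisson-based 3-sigma u-chart limits:

```python
import numpy as np

# Hypothetical subgroups: skateboards inspected and total scratches counted
units = np.array([20, 20, 20, 20, 20, 20])       # skateboards per subgroup
defects = np.array([9, 12, 7, 10, 14, 8])        # scratches per subgroup

u = defects / units                  # defects per unit (DPU) per subgroup
u_bar = defects.sum() / units.sum()  # mean DPU across all subgroups

# u-chart control limits (3-sigma), based on the Poisson assumption
# that the variance of a count equals its mean
sigma_u = np.sqrt(u_bar / units)
ucl = u_bar + 3 * sigma_u
lcl = np.clip(u_bar - 3 * sigma_u, 0, None)

stable = bool(np.all((u >= lcl) & (u <= ucl)))
print("mean DPU:", round(u_bar, 3))
print("UCL:", round(ucl[0], 3), "LCL:", round(lcl[0], 3))
print("all subgroups within limits:", stable)
```

Unlike the p-chart, which tracks the proportion of defective units, the u-chart tracks the number of defects per unit, which is why the Poisson rather than the binomial distribution applies here.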

MAIN TOPICS MINITAB TUTORIAL 30

- Process stability tests in the run-up to capability analysis
- u-chart, principle
- Accumulated DPU
- Poisson plot for analyzing the Poisson distribution
- Cumulative DPU plot
- Statistical analysis of the process capability of Poisson distributed data
- Lower and upper CI-limit
- DPU: mean, min/max and target

**31 DOE FULL FACTORIAL, 3 PREDICTORS
**In this 31st Minitab tutorial, we are in the axle material development department of Smartboard Company. We will accompany the development team as they reduce the brittleness of the skateboard axles to a minimum with the help of the so-called DOE, statistical design of experiments. In principle, the higher the brittleness of the skateboard axles, the greater the risk that even the slightest impact loads can cause the axles to break. The possible influencing factors with a potential impact on the brittleness of the axles were determined by the development team as part of an Ishikawa analysis: the chromium content in the axles, the annealing temperature during heat treatment and the type of heat treatment furnace. Given that the development team is under time pressure and must keep the scope of experimental tests on the production line to a minimum, statistical design of experiments will be used to determine the optimum settings for the three influencing variables with little experimental effort, with the aim of reducing axle embrittlement to a minimum. Statistical design of experiments, often abbreviated to DOE, is a very important and useful tool in the Six Sigma methodology, which deals with the statistical planning, execution, analysis and interpretation of cause-and-effect relationships. In order to deal with this complex subject area in the necessary depth, this training unit is segmented into four parts. In the first part, the fundamentals and basic ideas of statistical experimental design are explained in order to provide a good understanding of the most common experimental design types. 
In particular, we will get to know important terms such as center points, replications and block variables, and understand why a power analysis is always recommended for determining the number of required experimental replications. Well equipped with this basic knowledge, we will then enter the second part of our training unit and learn how to set up and analyze the appropriate experimental design, in our case the so-called factorial design. We will see that it is very important to also carry out the Anderson-Darling test for normal distribution, and we will understand why the so-called table of coded coefficients plays a very useful role in the DOE method. Another central topic in this part will be the so-called main effect plots and interaction plots, which we will construct manually step by step for didactic reasons, in order to be able to interpret the factor plots displayed in the output window in detail; on this occasion, we will also get to know the useful Layout Tool function. With the knowledge gained from the first and second parts, we will be well equipped to focus on the quality of our DOE model in the third part, for example to evaluate the model quality by using the corresponding coefficients of determination: R-squared, R-squared adjusted and R-squared predicted. In this context, we will also look at the associated regression equation in uncoded units, which was generated as part of the analysis of variance for our DOE model and represents the foundation for the upcoming response optimization. The so-called alias structure will also play an important role, which we will examine and interpret in detail. 
We will also get to know and discuss the useful Pareto chart of standardized effects, in order to distinguish significant terms from non-significant terms graphically and efficiently. We will learn that the so-called residual scatter, which cannot be described by our DOE model, should follow the laws of normal distribution. For this purpose, we will work with the normal probability plot and use all the important representations of residual analysis: residuals versus fits, histogram of residuals, and residuals versus order. With this information on the quality of our DOE model, we can move on to the fourth and final part of this Minitab tutorial and start the response optimization with the final DOE model. We will use the very helpful interactive response optimization window to set the three parameters in such a way that the undesirable embrittlement in the skateboard axles is reduced to a minimum. As part of this response optimization, we will also understand, for example, the difference between so-called individual and composite desirability, and how the corresponding confidence and prediction intervals are to be interpreted. At the end of this multi-part Minitab tutorial, we will be able to provide the management of Smartboard Company with a 95% reliable recommendation on how the three influencing variables should be set so that the material brittleness in the skateboard axles after heat treatment is as low as possible.
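The core of a 2-level full factorial analysis, coded factor levels and coefficients estimated by least squares, can be sketched as follows. The factor labels and response values below are made up for illustration and do not come from the tutorial's data set:

```python
import numpy as np
from itertools import product

# Coded 2^3 full factorial design: -1 / +1 levels for three hypothetical
# factors A (chromium content), B (annealing temperature), C (furnace type)
design = np.array(list(product([-1, 1], repeat=3)))   # 8 runs

# Illustrative brittleness responses, one value per run (made-up numbers)
y = np.array([14.2, 12.8, 11.5, 10.1, 13.0, 11.9, 9.8, 8.7])

# Model matrix: intercept, main effects, two-way and three-way interactions
A, B, C = design.T
X = np.column_stack([np.ones(8), A, B, C, A*B, A*C, B*C, A*B*C])

# Least-squares fit; with orthogonal coded columns this is exact
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
terms = ["const", "A", "B", "C", "AB", "AC", "BC", "ABC"]
for t, c in zip(terms, coef):
    # In coded units, an effect is the change from -1 to +1, i.e. twice the coefficient
    print(f"{t:>5}: coded coefficient {c:+.3f}, effect {2*c:+.3f}")
```

Because the coded columns are orthogonal, each coefficient can be interpreted independently; this is exactly why the table of coded coefficients is so useful for judging which terms matter.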

MAIN TOPICS MINITAB TUTORIAL 31, Part 1

- Overview of the most common experimental DOE design types
- Full-factorial, fractional-factorial and response surface design types
- Screening, mixture and Taguchi design types
- 2-Level factorial experimental design with standard generators
- Resolution levels of available factorial experimental designs
- Center points and replications
- Discrimination power analysis to determine the number of replications
- Standard order vs. run order
- Setting block variables

MAIN TOPICS MINITAB TUTORIAL 31, part 2

- Analysis of factorial experimental designs
- Test for normal distribution of the DOE response variable
- Analysis of the non-descriptive residual scatter using the 4-in-1 residual plot
- Evaluation of the "Coded coefficients" table
- Main effect plot and Interaction plot
- Dual interaction diagrams
- Construction of triple interaction diagrams
- Unstack response variables according to factor levels
- Working with the layout tool

MAIN TOPICS MINITAB TUTORIAL 31, Part 3

- DOE model coefficients of determination R-sq, R-sq(adj) and R-sq(pred)
- Analysis of variance within the framework of the DOE
- Coded coefficients
- Regression equation in non-coded units
- Alias structures
- Pareto diagram of the standardized effects
- Probability plot of the normal distribution
- Residuals versus fit
- Histogram of the residuals
- Residuals versus order

MAIN TOPICS MINITAB TUTORIAL 31, part 4

- Response optimization
- Multiple Response Prediction Analysis
- Response variable goal: Minimize, target, maximize
- Working with the interactive response optimization window
- Individual desirability
- Composite desirability
- Confidence interval in the context of Response optimization
- Prediction interval in the context of Response optimization

**31 DOE FULL FACTORIAL, 3 PREDICTORS
**In this 31st Minitab tutorial, we are in the axle material development department of Smartboard Company. We will accompany the development team how they reduce the brittleness in the skateboard Axles to a minimum, with the help of the so-called DOE, statistical design of experiment. In principle the higher the brittleness of the skateboard Axles the greater the risk that even the slightest impact loads can cause the Axles to break. The possible influencing factors that could have a potential impact on the brittleness of the Axles were determined by the development team as part of an Ishikawa analysis. The parameters chromium content in the axles, annealing temperature during heat treatment and the type of heat treatment furnace were identified as potential influencing factors. In view of the fact, that the development team is under time pressure and must also keep the scope of experimental tests on the production line to a minimum, so-called statistical design of experiment should be used to determine the optimum settings for the three influencing variables mentioned with little experimental effort, with the aim of reducing axis embrittlement to a minimum. This statistical design of experiments, often abbreviated to DOE is a very important and useful tool in the Six Sigma methodology, which deals with the statistical planning, execution, analysis and interpretation of cause-and-effect relationships. In order to be able to deal with this complex subject area in the necessary depth, this training unit is segmented into four parts. In the first part of this training unit, the fundamentals and basic ideas of statistical experimental design are explained in order to provide a good understanding of the most common of experimental design types. 
In particular we will get to know the important terms, such as center points, replications, and block variables, and understand why a discriminatory power analysis is always recommended for determining the number of required experimental replications. Well equipped with this basic knowledge, we will then enter the second part of our training unit, and get to know how to set up and analyze the appropriate experimental design, in our case the so-called factorial experimental design. We will see here, that it is very important to also carry out the test for normal distribution according to Anderson Darling, and we will also be able to understand why the so-called table of coded coefficients, plays a very useful role in the context in the DOE method. Another central topic in this part will be the so-called main effect plots and interaction effect plots, which we will construct manually step by step for didactic reasons, in order to be able to interpret the factor plots displayed in the output window in detail, and on this occasion, we will also get to know the useful Layout Tool function. With the knowledge gained from the first and second parts of this Minitab tutorial, we will then be well equipped to focus on the quality of our DOE model in the third part, for example to be able to evaluate the quality of our variance model by using the corresponding coefficients of determination, such as R-squared, R-squared adjusted, and R-squared predicted. In this context, we will also look at the associated regression equation in uncoded units, which was generated as part of the variance analysis for our DOE model, and basically represents the foundation for the upcoming response optimization. In this context, the so-called alias structure will play an important role which we will examine and interpret in detail. 
We will also get to know and discuss the useful so-called Pareto diagram of standardized effects, in order to be able to efficiently distinguish graphically significant terms from non-significant terms. We will learn, that the so-called residual scatter which cannot be described with our DOE model should follow the laws of normal distribution. For this purpose, we will work with the probability plot of the normal distribution and use all the important representations in the context of residual analysis, such as residuals versus fits, histogram of residuals, and residuals versus order. With this information regarding the quality of our DOE model, we can move on to the fourth and final part of this Minitab tutorial in order to start the Response optimization with the final DOE model. For this purpose, we will use the very helpful interactive response optimization window to set the three parameters in such a way that the undesirable embrittlement components in the skateboard axles are reduced to a minimum. As part of this response optimization, we will also understand for example, the difference between so-called individual and composite desirability. In particular, we will also understand how the corresponding confidence or prediction intervals are to be interpreted in the context of our Response optimization. So at the end of our multi-part Minitab tutorial session, we will be able to provide the management of Smartboard Company with a 95% reliable recommendation, on how the three influencing variables should be set in concrete terms so that the material brittleness in the skateboard axles after heat treatment, is as low as possible.

MAIN TOPICS MINITAB TUTORIAL 31, Part 1

- Overview of the most common experimental DOE design types
- Full-factorial, fractional-factorial and response surface design types
- Screening, mixture and Taguchi design types
- 2-Level factorial experimental design with standard generators
- Resolution levels of available factorial experimental designs
- Center points and replications
- Discrimination power analysis to determine the number of replications
- Standard order vs. run order
- Setting block variables

MAIN TOPICS MINITAB TUTORIAL 31, part 2

- Analysis of factorial experimental designs
- Test for normal distribution of the DOE response variable
- Analysis of the non-descriptive residual scatter using the 4 in1 residual plot
- Evaluation of the „Coded coefficients“ table
- Main effect plot and Interaction plot
- Dual interaction diagrams
- Construction of triple interaction diagrams
- Unstack response variables according to factor levels
- Working with the layout tool

MAIN TOPICS MINITAB TUTORIAL 31, Part 3

- DOE model coefficients R- sq, R- sq(adj) and R-sq (prog)
- Analysis of variance within the framework of the DOE
- Coded coefficients
- Regression equation in non-coded units
- Alias structures
- Pareto diagram of the standardized effects
- Probability plot of the normal distribution
- Residuals versus fit
- Histogram of the residuals
- Residuals versus order

MAIN TOPICS MINITAB TUTORIAL 31, part 4

- Response optimization
- Multiple Response Prediction Analysis
- Response variable goal: Minimize, target, maximize
- Working with interactive response optimization window
- Individual desirability
- composite desirability
- Confidence interval in the context of Response optimization
- Prediction interval in the context of Response optimization

**31 DOE FULL FACTORIAL, 3 PREDICTORS
**In this 31st Minitab tutorial, we are in the axle material development department of Smartboard Company. We will accompany the development team how they reduce the brittleness in the skateboard Axles to a minimum, with the help of the so-called DOE, statistical design of experiment. In principle the higher the brittleness of the skateboard Axles the greater the risk that even the slightest impact loads can cause the Axles to break. The possible influencing factors that could have a potential impact on the brittleness of the Axles were determined by the development team as part of an Ishikawa analysis. The parameters chromium content in the axles, annealing temperature during heat treatment and the type of heat treatment furnace were identified as potential influencing factors. In view of the fact, that the development team is under time pressure and must also keep the scope of experimental tests on the production line to a minimum, so-called statistical design of experiment should be used to determine the optimum settings for the three influencing variables mentioned with little experimental effort, with the aim of reducing axis embrittlement to a minimum. This statistical design of experiments, often abbreviated to DOE is a very important and useful tool in the Six Sigma methodology, which deals with the statistical planning, execution, analysis and interpretation of cause-and-effect relationships. In order to be able to deal with this complex subject area in the necessary depth, this training unit is segmented into four parts. In the first part of this training unit, the fundamentals and basic ideas of statistical experimental design are explained in order to provide a good understanding of the most common of experimental design types. 
In particular we will get to know the important terms, such as center points, replications, and block variables, and understand why a discriminatory power analysis is always recommended for determining the number of required experimental replications. Well equipped with this basic knowledge, we will then enter the second part of our training unit, and get to know how to set up and analyze the appropriate experimental design, in our case the so-called factorial experimental design. We will see here, that it is very important to also carry out the test for normal distribution according to Anderson Darling, and we will also be able to understand why the so-called table of coded coefficients, plays a very useful role in the context in the DOE method. Another central topic in this part will be the so-called main effect plots and interaction effect plots, which we will construct manually step by step for didactic reasons, in order to be able to interpret the factor plots displayed in the output window in detail, and on this occasion, we will also get to know the useful Layout Tool function. With the knowledge gained from the first and second parts of this Minitab tutorial, we will then be well equipped to focus on the quality of our DOE model in the third part, for example to be able to evaluate the quality of our variance model by using the corresponding coefficients of determination, such as R-squared, R-squared adjusted, and R-squared predicted. In this context, we will also look at the associated regression equation in uncoded units, which was generated as part of the variance analysis for our DOE model, and basically represents the foundation for the upcoming response optimization. In this context, the so-called alias structure will play an important role which we will examine and interpret in detail. 
We will also get to know and discuss the useful so-called Pareto diagram of standardized effects, in order to be able to efficiently distinguish graphically significant terms from non-significant terms. We will learn, that the so-called residual scatter which cannot be described with our DOE model should follow the laws of normal distribution. For this purpose, we will work with the probability plot of the normal distribution and use all the important representations in the context of residual analysis, such as residuals versus fits, histogram of residuals, and residuals versus order. With this information regarding the quality of our DOE model, we can move on to the fourth and final part of this Minitab tutorial in order to start the Response optimization with the final DOE model. For this purpose, we will use the very helpful interactive response optimization window to set the three parameters in such a way that the undesirable embrittlement components in the skateboard axles are reduced to a minimum. As part of this response optimization, we will also understand for example, the difference between so-called individual and composite desirability. In particular, we will also understand how the corresponding confidence or prediction intervals are to be interpreted in the context of our Response optimization. So at the end of our multi-part Minitab tutorial session, we will be able to provide the management of Smartboard Company with a 95% reliable recommendation, on how the three influencing variables should be set in concrete terms so that the material brittleness in the skateboard axles after heat treatment, is as low as possible.

MAIN TOPICS MINITAB TUTORIAL 31, Part 1

- Overview of the most common experimental DOE design types
- Full-factorial, fractional-factorial and response surface design types
- Screening, mixture and Taguchi design types
- 2-Level factorial experimental design with standard generators
- Resolution levels of available factorial experimental designs
- Center points and replications
- Discrimination power analysis to determine the number of replications
- Standard order vs. run order
- Setting block variables

MAIN TOPICS MINITAB TUTORIAL 31, part 2

- Analysis of factorial experimental designs
- Test for normal distribution of the DOE response variable
- Analysis of the non-descriptive residual scatter using the 4 in1 residual plot
- Evaluation of the „Coded coefficients“ table
- Main effect plot and Interaction plot
- Dual interaction diagrams
- Construction of triple interaction diagrams
- Unstack response variables according to factor levels
- Working with the layout tool

MAIN TOPICS MINITAB TUTORIAL 31, Part 3

- DOE model coefficients R- sq, R- sq(adj) and R-sq (prog)
- Analysis of variance within the framework of the DOE
- Coded coefficients
- Regression equation in non-coded units
- Alias structures
- Pareto diagram of the standardized effects
- Probability plot of the normal distribution
- Residuals versus fit
- Histogram of the residuals
- Residuals versus order

MAIN TOPICS MINITAB TUTORIAL 31, part 4

- Response optimization
- Multiple Response Prediction Analysis
- Response variable goal: Minimize, target, maximize
- Working with interactive response optimization window
- Individual desirability
- composite desirability
- Confidence interval in the context of Response optimization
- Prediction interval in the context of Response optimization

**31 DOE FULL FACTORIAL, 3 PREDICTORS
**In this 31st Minitab tutorial, we are in the axle material development department of Smartboard Company. We will accompany the development team how they reduce the brittleness in the skateboard Axles to a minimum, with the help of the so-called DOE, statistical design of experiment. In principle the higher the brittleness of the skateboard Axles the greater the risk that even the slightest impact loads can cause the Axles to break. The possible influencing factors that could have a potential impact on the brittleness of the Axles were determined by the development team as part of an Ishikawa analysis. The parameters chromium content in the axles, annealing temperature during heat treatment and the type of heat treatment furnace were identified as potential influencing factors. In view of the fact, that the development team is under time pressure and must also keep the scope of experimental tests on the production line to a minimum, so-called statistical design of experiment should be used to determine the optimum settings for the three influencing variables mentioned with little experimental effort, with the aim of reducing axis embrittlement to a minimum. This statistical design of experiments, often abbreviated to DOE is a very important and useful tool in the Six Sigma methodology, which deals with the statistical planning, execution, analysis and interpretation of cause-and-effect relationships. In order to be able to deal with this complex subject area in the necessary depth, this training unit is segmented into four parts. In the first part of this training unit, the fundamentals and basic ideas of statistical experimental design are explained in order to provide a good understanding of the most common of experimental design types. 
In particular we will get to know the important terms, such as center points, replications, and block variables, and understand why a discriminatory power analysis is always recommended for determining the number of required experimental replications. Well equipped with this basic knowledge, we will then enter the second part of our training unit, and get to know how to set up and analyze the appropriate experimental design, in our case the so-called factorial experimental design. We will see here, that it is very important to also carry out the test for normal distribution according to Anderson Darling, and we will also be able to understand why the so-called table of coded coefficients, plays a very useful role in the context in the DOE method. Another central topic in this part will be the so-called main effect plots and interaction effect plots, which we will construct manually step by step for didactic reasons, in order to be able to interpret the factor plots displayed in the output window in detail, and on this occasion, we will also get to know the useful Layout Tool function. With the knowledge gained from the first and second parts of this Minitab tutorial, we will then be well equipped to focus on the quality of our DOE model in the third part, for example to be able to evaluate the quality of our variance model by using the corresponding coefficients of determination, such as R-squared, R-squared adjusted, and R-squared predicted. In this context, we will also look at the associated regression equation in uncoded units, which was generated as part of the variance analysis for our DOE model, and basically represents the foundation for the upcoming response optimization. In this context, the so-called alias structure will play an important role which we will examine and interpret in detail. 
We will also get to know and discuss the useful so-called Pareto diagram of standardized effects, in order to be able to efficiently distinguish graphically significant terms from non-significant terms. We will learn, that the so-called residual scatter which cannot be described with our DOE model should follow the laws of normal distribution. For this purpose, we will work with the probability plot of the normal distribution and use all the important representations in the context of residual analysis, such as residuals versus fits, histogram of residuals, and residuals versus order. With this information regarding the quality of our DOE model, we can move on to the fourth and final part of this Minitab tutorial in order to start the Response optimization with the final DOE model. For this purpose, we will use the very helpful interactive response optimization window to set the three parameters in such a way that the undesirable embrittlement components in the skateboard axles are reduced to a minimum. As part of this response optimization, we will also understand for example, the difference between so-called individual and composite desirability. In particular, we will also understand how the corresponding confidence or prediction intervals are to be interpreted in the context of our Response optimization. So at the end of our multi-part Minitab tutorial session, we will be able to provide the management of Smartboard Company with a 95% reliable recommendation, on how the three influencing variables should be set in concrete terms so that the material brittleness in the skateboard axles after heat treatment, is as low as possible.

MAIN TOPICS MINITAB TUTORIAL 31, Part 1

- Overview of the most common experimental DOE design types
- Full-factorial, fractional-factorial and response surface design types
- Screening, mixture and Taguchi design types
- 2-Level factorial experimental design with standard generators
- Resolution levels of available factorial experimental designs
- Center points and replications
- Discrimination power analysis to determine the number of replications
- Standard order vs. run order
- Setting block variables
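To make the replication question tangible: Minitab's power-and-sample-size dialog does this calculation internally; the following is only a simplified sketch using a normal approximation (Minitab uses the exact t-distribution), with illustrative function names chosen here.

```python
from math import sqrt, erf

def phi(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def power_2level(effect, sigma, n_runs):
    """Approximate power to detect an effect of the given size in a
    2-level factorial with n_runs total runs, at two-sided alpha = 0.05
    (normal approximation; an effect estimate has std. error 2*sigma/sqrt(N))."""
    se = 2.0 * sigma / sqrt(n_runs)
    z_crit = 1.959964                 # z quantile for alpha = 0.05, two-sided
    return phi(abs(effect) / se - z_crit)

def replicates_needed(effect, sigma, runs_per_replicate, target_power=0.8):
    """Smallest number of replicates of the base design that reaches
    the target discriminatory power."""
    r = 1
    while power_2level(effect, sigma, r * runs_per_replicate) < target_power:
        r += 1
    return r
```

For example, detecting an effect of 2 units against a standard deviation of 2 with a 2^3 base design (8 runs) requires four replicates to reach roughly 80% power under this approximation.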

MAIN TOPICS MINITAB TUTORIAL 31, part 2

- Analysis of factorial experimental designs
- Test for normal distribution of the DOE response variable
- Analysis of the non-descriptive residual scatter using the 4-in-1 residual plot
- Evaluation of the "Coded coefficients" table
- Main effect plot and Interaction plot
- Dual interaction diagrams
- Construction of triple interaction diagrams
- Unstack response variables according to factor levels
- Working with the layout tool
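The distinction between standard order and run order can be sketched in a few lines: the design is written down in standard (Yates) order, and the runs are then shuffled into the randomized order actually executed. This is an illustrative sketch, not Minitab's internal implementation.

```python
import itertools
import random

def full_factorial(n_factors):
    """2-level full factorial in standard (Yates) order with coded
    -1/+1 levels: factor A alternates fastest, B in pairs, and so on."""
    return [tuple(reversed(combo))
            for combo in itertools.product([-1, 1], repeat=n_factors)]

def randomized_run_order(design, seed=None):
    """Pair every run with its standard-order number, then shuffle:
    the shuffled list is the run order actually executed in the lab."""
    rng = random.Random(seed)
    runs = list(enumerate(design, start=1))   # (StdOrder, factor settings)
    rng.shuffle(runs)
    return runs
```

Keeping the standard-order index attached to each shuffled run is what lets the results later be sorted back for analysis.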

MAIN TOPICS MINITAB TUTORIAL 31, Part 3

- DOE model coefficients of determination R-sq, R-sq(adj) and R-sq(pred)
- Analysis of variance within the framework of the DOE
- Coded coefficients
- Regression equation in non-coded units
- Alias structures
- Pareto diagram of the standardized effects
- Probability plot of the normal distribution
- Residuals versus fit
- Histogram of the residuals
- Residuals versus order
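Why the coded coefficients are so convenient can be shown with a minimal sketch: in an orthogonal -1/+1 design, least squares collapses to simple sums, and R-squared follows directly from the residuals. The function name and the formula simplification are illustrative assumptions, not Minitab's implementation.

```python
def coded_fit(columns, y):
    """Coefficients and R-squared values for an orthogonal 2-level
    design with coded -1/+1 columns. Orthogonality makes least squares
    collapse to simple sums: each coefficient is sum(x * y) / N,
    i.e. half of the corresponding effect."""
    n = len(y)
    b0 = sum(y) / n
    coeffs = {name: sum(x * v for x, v in zip(col, y)) / n
              for name, col in columns.items()}
    fitted = [b0 + sum(coeffs[k] * columns[k][i] for k in columns)
              for i in range(n)]
    ss_tot = sum((v - b0) ** 2 for v in y)
    ss_res = sum((v - f) ** 2 for v, f in zip(y, fitted))
    r_sq = 1.0 - ss_res / ss_tot
    p = len(columns)   # model terms besides the constant
    # adjusted R-squared needs spare degrees of freedom (n > p + 1)
    r_sq_adj = 1.0 - (1.0 - r_sq) * (n - 1) / (n - p - 1)
    return b0, coeffs, r_sq, r_sq_adj
```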

MAIN TOPICS MINITAB TUTORIAL 31, part 4

- Response optimization
- Multiple Response Prediction Analysis
- Response variable goal: Minimize, target, maximize
- Working with interactive response optimization window
- Individual desirability
- Composite desirability
- Confidence interval in the context of Response optimization
- Prediction interval in the context of Response optimization
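The difference between individual and composite desirability can be sketched directly: each response gets its own desirability between 0 and 1 according to its goal, and the composite value is their geometric mean. This sketch assumes linear desirability (weight 1); Minitab also supports nonlinear weights.

```python
def d_minimize(y, target, upper):
    """Individual desirability for the goal 'minimize':
    1 at or below the target, 0 at or above the upper bound."""
    if y <= target:
        return 1.0
    if y >= upper:
        return 0.0
    return (upper - y) / (upper - target)

def d_maximize(y, lower, target):
    """Individual desirability for the goal 'maximize'."""
    if y >= target:
        return 1.0
    if y <= lower:
        return 0.0
    return (y - lower) / (target - lower)

def composite_desirability(ds):
    """Composite desirability: geometric mean of the individual values."""
    prod = 1.0
    for d in ds:
        prod *= d
    return prod ** (1.0 / len(ds))
```

The geometric mean has the useful property that a single completely undesirable response (d = 0) drives the composite desirability to 0.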

**32 DOE FULL FACTORIAL, CENTER POINTS, BLOCKS
**In this 32nd Minitab tutorial, we are in the wind tunnel laboratory of Smartboard Company, where the aerodynamic properties of newly developed high-speed racing suits are currently being tested. These racing suits are designed to minimize air resistance on the skateboard pilots in high-speed championships, in order to achieve maximum speeds on the race track. To determine the air resistance, the respective skateboard pilot in the racing suit is exposed to a defined air flow in the wind tunnel, and the drag coefficient, the so-called cd value, is measured as a measure of the aerodynamic behavior of the racing suit. The lower the cd value, the lower the aerodynamic drag, which in turn has a positive effect on the maximum achievable speed. The parameters to be varied on the racing suit, which potentially influence its aerodynamics, were determined as part of an Ishikawa analysis: surface roughness, seam width and material thickness were identified as potential influencing factors. The aim of this Minitab tutorial will be to use statistical design of experiments to determine an optimum combination of surface roughness, seam width and material thickness, so that the drag of the racing suit is reduced to a minimum. We will see how the development team sets up and implements a full factorial design with so-called center points and blocks. In the first part of this Minitab tutorial, we will focus on determining the required number of replications, as well as on discriminatory power and the first and second types of error. In this context, we will learn how to use a discriminatory power analysis to determine the appropriate number of replications for our experimental design, and how the relationship between discriminatory power and the two error types can be easily understood in the context of hypothesis testing.

The main topics in the second part of this Minitab tutorial then concentrate on drawing up the actual DOE design. Here we will learn how to set up the full factorial experimental design step by step. In this context, we will also understand what center points are, and why the setting of so-called block variables can play an important role depending on the task. On this occasion, we will get to know the useful random generator function for randomizing data sets, which can also be very helpful for randomization in other tasks in general. Furthermore, we will learn to work with so-called interval plots in the context of DOE, and experience that these interval plots are very useful for getting a visual impression of the trends and tendencies in the experimental runs. The focus of the third part of this Minitab tutorial will then be to analyze our DOE model in terms of its quality and capability, asking how well this DOE model can actually represent the technical cause-effect relationships realistically. To this end, we will discuss and use the coefficients of determination R-squared, R-squared adjusted and R-squared predicted in order to assess the quality of our DOE model. At this point, the table of coded coefficients becomes important again, and we will use the previously set center points to check the linearity of our DOE model. With the help of our block variables, we will also be able to analyze whether there are significant differences between the blocked test runs. We will then learn how to use main effect and interaction plots to identify the corresponding cause-effect relationships, and how to perform a hierarchical reduction of the variance model based on the Pareto chart of standardized effects, in order to optimize the predictive quality of our DOE model. For this optimization of our DOE model, we will get to know and use the method of manual backward elimination for a better understanding.

As part of the residual analysis, we will evaluate the corresponding residual plots to check whether our residuals also follow the laws of normal distribution. We will use the probability plot of the residuals, as well as the plots residuals versus fits, residuals versus order and the residual histogram, to check whether the residual scatter that cannot be described by our model shows undesirable trends or tendencies that could falsify our results in the response optimization. With the knowledge gained up to this point, we will then move on to the final part of this Minitab tutorial and start with the response optimization. Here we will use the very useful interactive response optimization window to set the influencing variables so that the required target value of the response variable can be achieved. We will get to know the important graphics contour plot, cube plot and surface plot, and see that these forms of representation are particularly suitable in day-to-day business for defining working ranges for the parameter settings, so that, for example, the desired target value can still be achieved even in the event of undesirable, unexpected process variations. At the end of this multi-part Minitab tutorial, we will be able to make concrete recommendations, based on the corresponding confidence and prediction intervals, to the technical management of Smartboard Company as to how the influencing factors, or the working ranges for the influencing factors, should be set so that the required target value of our response variable is achieved with 95% probability.

MAIN TOPICS MINITAB TUTORIAL 32, part 1

- Determination of the required number of replications
- Discrimination power analysis
- Discrimination quality vs. 1st type error, 2nd type error

MAIN TOPICS MINITAB TUTORIAL 32, part 2

- Set up DOE design plan
- Randomization using the random generator
- Interval plot for visualizing cause-effect relationships
- Setting center points and block variables
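Setting center points and block variables can be illustrated with a small sketch: the classical way to split a 2-level full factorial into two blocks is to confound the highest-order interaction with the block, and center points are then appended to each block. This is a hand-rolled illustration under those assumptions, not Minitab's design generator.

```python
import itertools

def blocked_full_factorial(n_factors, centers_per_block):
    """2-level full factorial split into two blocks by confounding the
    highest-order interaction (e.g. ABC for 3 factors) with the block,
    plus center points (all factors at level 0) appended to each block
    so that curvature can be checked later."""
    blocks = {1: [], 2: []}
    for run in itertools.product([-1, 1], repeat=n_factors):
        sign = 1
        for level in run:
            sign *= level              # sign of the ABC... interaction
        blocks[1 if sign == -1 else 2].append(run)
    center = tuple([0] * n_factors)
    for b in blocks:
        blocks[b].extend([center] * centers_per_block)
    return blocks
```

Confounding the highest-order interaction is the standard choice because that term is the one least likely to matter technically.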

MAIN TOPICS MINITAB TUTORIAL 32, Part 3

- Analyze factorial experimental design
- Evaluation of the Coded coefficients table
- Evaluation of linearity using the center points
- Evaluation of the block variables
- Evaluation of the main effect diagrams
- Evaluation of the interaction diagrams
- Assessment of model quality using the coefficients R-sq, R-sq(adj) and R-sq(pred)
- Optimization of the DOE model
- Evaluation of the Analysis of variance table
- Pareto diagram of the standardized effects
- Hierarchical backward elimination
- Manual backward elimination
- Residual analysis
- Residual probability plot
- Residuals vs. fits
- Residuals vs. order
- Residual histogram
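The center-point linearity check listed above has a simple core idea, sketched here under simplifying assumptions (the standard error uses only the center-point scatter; Minitab reports a formal curvature test in the ANOVA table):

```python
from statistics import mean, stdev
from math import sqrt

def curvature_gap(corner_y, center_y):
    """Rough linearity check using the center points: the difference
    between the mean corner response and the mean center response.
    Near zero, the fitted plane also passes through the middle of the
    design space; a gap that is large relative to its standard error
    indicates curvature."""
    gap = mean(corner_y) - mean(center_y)
    s = stdev(center_y) if len(center_y) > 1 else 0.0
    se = s * sqrt(1.0 / len(corner_y) + 1.0 / len(center_y))
    return gap, se
```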

MAIN TOPICS MINITAB TUTORIAL 32, part 4

- Response optimization
- Individual and composite desirability level
- Working with the interactive response optimization window
- Interpretation of the DOE contour plot
- Interpretation of the DOE cube plot
- Interpretation of the DOE surface plot

**33 DOE FRACTIONAL FACTORIAL, 6 PREDICTORS
**In the 33rd Minitab tutorial, we are in the test department for skateboard wheels. Here we will accompany the materials testing team as they optimize the abrasion behavior of a newly developed material for skateboard wheels, made of Kevlar fiber-reinforced plastic. Kevlar is used in industry as a material for bulletproof vests and cut-resistant gloves, and the aim is to test the extent to which Kevlar components in the wheel material could reduce the abrasion of the skateboard wheels. Smartboard Company has a specially designed test station for this purpose, in which the skateboard wheel to be tested is fixed to an axle driven by an electric motor. The skateboard wheel is then rolled against a counter body at a defined speed and a defined contact pressure; the surface properties of the counter body correspond to those of a typical road surface. At the end of the test period, the material abrasion of the skateboard wheel is determined in grams by calculating the difference between the wheel's weight before and after the wear test. The amount of abrasion in grams is therefore our response variable, which should ideally be as low as possible: the lower the abrasion, the higher the wear resistance of the skateboard wheel, and the higher the customer satisfaction. However, as the research project is under great time pressure, the DOE team decides to use a so-called fractional factorial design under the given boundary conditions and the available technical expertise.

In order to understand the subject area of fractional experimental designs in the necessary depth, this training unit is divided into three parts. In the first part of our Minitab tutorial unit, we will look at the fundamentals of fractional experimental design types, learn what distinguishes a fractional factorial experimental design from a full factorial one, and see how to set up a fractional factorial experimental design properly. We will learn that these fractional factorial experimental designs inevitably have to accept certain confounding structures among the influencing factors, also known as alias structures. We will therefore also learn why the DOE team's high level of technical expertise regarding the potential cause-effect relationships plays a decisive role in drawing up a usable fractional experimental design that is actually capable of modeling the real cause-effect relationships theoretically and mathematically. Well equipped with this knowledge, we will then be able to properly set up and analyze a fractional experimental design in the second part of our Minitab tutorial, for example by evaluating the table of coded coefficients and working with the Pareto chart of standardized effects. We will learn how to interpret the so-called Lenth's PSE parameter in the context of the Pareto chart of standardized effects. We will then optimize our DOE model by removing non-significant terms through hierarchical backward elimination, based on the corresponding coefficients, model quality parameters, p-values and the Pareto chart of standardized effects.

In this context, we will also return to the so-called alias structure present in the fractional experimental design, which shows us which confounding structures we have accepted as the price for keeping the number of experimental trials as low as possible despite the high number of influencing factors. After the backward elimination, we can then assess the final model quality of our optimized DOE model by using the corresponding coefficients of determination, such as R-squared adjusted and R-squared predicted. Finally, at the end of the second part, we will carry out the required analysis of the non-descriptive residual scatter. In the third and final part of our Minitab tutorial, we will then use the final optimized DOE model to enter the response optimization phase and determine the optimum parameter settings, so that the abrasion on the skateboard wheels is reduced to a minimum. In this context, we will also look at the corresponding interaction plots. Once we have determined the optimum parameter settings by using the response optimization, we will define specific working ranges for the parameter settings. For this purpose, we will get to know the useful display forms of the contour plot and the cube plot, actively create them, and interpret them in order to define the permissible tolerance ranges for our parameter settings, so that, for example, the required target value of our response variable is still achieved even with unexpected process instabilities.

MAIN TOPICS MINITAB TUTORIAL 33, Part 1

- Introduction to fractional design of experiments
- Establishment of a fractional design plan
- Overview of possible design plan types
- Interpretation of the alias structures in the fractional experimental design
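The alias structure itself is pure word arithmetic and can be sketched in a few lines. The generators used below (E = ABC, F = BCD, written as the words 'ABCE' and 'BCDF') are an assumption chosen as typical defaults for a quarter-fraction 2^(6-2) design; the function name is illustrative.

```python
def alias_group(term, generators):
    """All aliases of one model term in a fractional factorial design.
    A generator like E = ABC is written as the word 'ABCE'. Multiplying
    words (letters cancel in pairs, i.e. a set symmetric difference)
    yields the full defining relation, and multiplying the term by each
    defining word yields its aliases."""
    defining = [set()]
    for g in generators:
        defining += [d ^ set(g) for d in defining]
    t = set(term)
    return sorted(''.join(sorted(t ^ d)) for d in defining if d)
```

For the quarter fraction above, factor A is aliased with BCE, DEF and ABCDF, which is exactly the kind of confounding the DOE team must judge as technically acceptable or not.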

MAIN TOPICS MINITAB TUTORIAL 33, part 2

- Analysis of a ¼-fractional design plan
- Table of coded coefficients
- Pareto diagram of the standardized effects
- Lenth's PSE parameter
- Hierarchical backward elimination of non-significant terms
- Interpretation of the main effect plots
- Coefficients of determination of the DOE model
- Evaluation of the alias structures
- Residual analysis
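Lenth's PSE, the yardstick behind the significance line in the Pareto chart of standardized effects for unreplicated designs, can be computed by hand:

```python
from statistics import median

def lenth_pse(effects):
    """Lenth's pseudo standard error for unreplicated 2-level designs:
    s0 = 1.5 * median(|effect|), then PSE = 1.5 * median of those
    |effects| that fall below 2.5 * s0 (trimming likely-active effects
    so they do not inflate the noise estimate)."""
    abs_e = [abs(e) for e in effects]
    s0 = 1.5 * median(abs_e)
    trimmed = [e for e in abs_e if e < 2.5 * s0]
    return 1.5 * median(trimmed)
```

In the example below, the one large effect (8.0) is trimmed out, so the PSE reflects only the noise-like effects.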

MAIN TOPICS MINITAB TUTORIAL 33, Part 3

- Interpretation of the interaction plots
- Response optimization
- Contour diagram for defining optimum parameter settings
- Cube plot for data means, based on the experimentally determined data
- Cube plot for fitted means, based on the DOE regression equation

**34 DOE: RESPONSE SURFACE DESIGN
**In this 34th Minitab tutorial, we visit the painting and coating department of Smartboard Company. Here, the skateboard decks are coated with liquid paint in an automated painting process, according to customer requirements. The painting process must be designed in such a way that the applied paint layer has a minimum adhesive force to withstand external stresses such as impact and shock loads. The adhesive force of the paint layer achieved during the painting process is determined in the laboratory using a standardized scratch test: a diamond stylus is pressed vertically into the paint layer with a constantly increasing test force and simultaneously moved horizontally across it. The maximum test force at which the paint layer first flakes off, together with the crack characteristics evaluated under the microscope, determines the adhesive force of the paint layer. The automated painting process can be divided into three main process steps: priming, painting, and drying. In the first step, priming, the skateboard decks undergo continuous immersion priming in a dip tank in order to reduce irregularities and open pores on the deck surface. In preliminary tests, the team identified the layer thickness of the primer as the main influencing factor in this step. In the second step, painting, the final layer of paint is applied by continuously passing the decks through a paint booth and coating them with robot-controlled spray nozzles. Here, the team identified the distance between the spray nozzles and the deck surface as the main influencing factor. The preliminary tests also showed that this distance should be as large as possible, because the gloss level of the paint layer improves as the nozzle distance increases.
In the subsequent third step, drying, the painted skateboard decks are passed through a continuously operating multi-zone dryer to remove the remaining water content of the paint layer. In preliminary tests, the team identified the average drying temperature as the main influencing factor in this step.

Recently, however, skateboard decks have often had to be scrapped because an above-average number of paint layers failed to achieve the required minimum adhesive force. For this reason, a quality team was formed to use the DOE methodology to identify the parameter effects and interaction effects that have a significant influence on the adhesive force. The core task in this Minitab tutorial will therefore be to mathematically model and analyze the influence of these three parameters on the adhesive force using a suitable statistical design of experiments, and to set them optimally by means of response optimization so that the required minimum adhesive force is achieved. Since full-factorial preliminary tests had already shown that non-linear cause-effect relationships exist, the team decided to work with a response surface design, which is therefore the central topic of this tutorial. To this end, we will first learn what distinguishes a response surface design from a factorial or fractional factorial design. The important star points will play a central role in this context: we will get to know the difference between star points and center points, and see that the star-point distance alpha plays a very important role. We will learn what a central composite design is and what a Box-Behnken design is, and, by constructing a 3D scatter plot, we will discuss the mathematical requirements regarding orthogonality that an effective central composite design should ideally possess.

With the help of the 3D scatter plot, it will also be very easy to understand the difference between star points and center points. We will learn how to use the significance values from the hypothesis tests to assess whether the individual terms are significant or non-significant, and how to assess the effect sizes using the Pareto chart of standardized effects. We will then use the factor plots to discuss the effect sizes and the directions of the respective effects. With this knowledge, we will be able to determine the optimal parameter settings as part of the final response optimization. In this context, we will also learn how to create a contour plot in order to define process-safe working ranges for the parameter settings, so that the required minimum adhesive force of the paint layer is still achieved even in the event of unplanned process fluctuations. We will also create and discuss the surface plot, which gives a very good three-dimensional visual impression of how the response variable develops as a function of the influencing variables. At the end of this Minitab tutorial, we will be able to use all the available analysis results, and in particular the confidence and prediction intervals, to make concrete recommendations to the technical management of Smartboard Company on the optimum parameter settings for achieving the required minimum adhesive force of the paint layer on the skateboard decks, even with these non-linear cause-effect relationships.
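The different point types of a central composite design can be made concrete with a short sketch. This is an illustrative construction, not the Minitab procedure: for three factors, the design combines the 8 cube points of a full factorial, 6 star points on the factor axes at distance alpha, and replicated center points; the choice alpha = (2^k)^(1/4) shown here is the common rotatable star-point distance and is assumed for illustration:

```python
from itertools import product

def ccd_points(k=3, n_center=6):
    """Point types of a central composite design for k factors (illustrative)."""
    alpha = (2 ** k) ** 0.25  # rotatable star-point distance, ~1.682 for k = 3
    # cube points: full 2^k factorial at coded levels -1/+1
    cube = list(product((-1.0, 1.0), repeat=k))
    # star (axial) points: one factor at +/-alpha, all others at 0
    star = []
    for i in range(k):
        for sign in (-alpha, alpha):
            point = [0.0] * k
            point[i] = sign
            star.append(tuple(point))
    # center points: all factors at 0, replicated to estimate pure error
    center = [(0.0,) * k] * n_center
    return cube, star, center

cube, star, center = ccd_points()
print(len(cube), len(star), len(center))  # 8 6 6
```

Plotting these three point sets in a 3D scatter plot reproduces exactly the picture discussed above: the cube points span the factorial region, the star points extend it along the axes, and the center points allow the curvature and the pure error to be estimated.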

MAIN TOPICS MINITAB TUTORIAL 34, Part 1

- Response surface design, fundamentals
- Central composite design
- Box-Behnken design
- 1/2-fraction central composite response surface design
- 1/4-fraction central composite response surface design
- 1/8-fraction central composite response surface design
- Creating a response surface design plan
- Point types: cube points, center points, star points
- Identifying the different point types in the 3D scatter plot

MAIN TOPICS MINITAB TUTORIAL 34, Part 2

- Analyzing a response surface design
- Coded coefficients
- Main effect plots
- Interaction plots
- Coefficients of determination of the DOE model
- Interpretation of the analysis of variance
- Regression equation in uncoded units
- Pareto diagram of the standardized effects
- Residual analysis
- Response optimization
- Multiple Response Prediction
- Defining process-safe working ranges
- Contour plot
- Surface plot
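The response optimization step on a fitted quadratic model ultimately amounts to locating the stationary point of the regression equation. A minimal sketch with purely illustrative coefficients (assumed values, not the tutorial's actual regression equation): for y = b0 + b1·x1 + b2·x2 + b11·x1² + b22·x2² + b12·x1·x2, setting both partial derivatives to zero yields a 2×2 linear system:

```python
def stationary_point(b1, b2, b11, b22, b12):
    """Solve dy/dx1 = 0 and dy/dx2 = 0 for a two-factor quadratic model.

    dy/dx1 = b1 + 2*b11*x1 + b12*x2 = 0
    dy/dx2 = b2 + 2*b22*x2 + b12*x1 = 0
    """
    det = 4 * b11 * b22 - b12 ** 2
    if det == 0:
        raise ValueError("model has no unique stationary point")
    # Cramer's rule on the 2x2 system above
    x1 = (-2 * b22 * b1 + b12 * b2) / det
    x2 = (-2 * b11 * b2 + b12 * b1) / det
    return x1, x2

# Illustrative coefficients only (assumed, not from the tutorial's data):
x1, x2 = stationary_point(b1=4.0, b2=2.0, b11=-1.0, b22=-1.0, b12=0.0)
print(x1, x2)  # 2.0 1.0
```

Whether this stationary point is a maximum, a minimum, or a saddle depends on the signs of the quadratic coefficients; the contour plot and surface plot listed above make exactly this geometry visible and help define the process-safe working ranges around the optimum.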
