
      The p-value for the one-sided hypothesis test is 0.0073, which is less than the chosen α of 0.05. We therefore reject the null hypothesis and conclude that the mean normalized fluorescence for nerve tissue is greater than the mean normalized fluorescence for muscle tissue. Subject-matter knowledge is needed to judge whether the difference is practically important; confidence intervals for the difference (reported in JMP) can be helpful in that assessment. A brief computational sketch of this test appears after the final step below.

      7. Select Window > Close All.
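      For readers who want to reproduce this kind of comparison outside JMP, the following is a minimal sketch of a pooled, one-sided two-sample t-test using Python and SciPy (version 1.6 or later for the alternative argument). The fluorescence values are hypothetical placeholders, not the data analyzed above.

      # Sketch of a pooled, one-sided two-sample t-test (hypothetical data).
      import numpy as np
      from scipy import stats

      nerve = np.array([6530., 6620., 6480., 6710., 6550.])    # hypothetical values
      muscle = np.array([5960., 6080., 6010., 5890., 6120.])   # hypothetical values

      # H0: mean(nerve) == mean(muscle)   vs   H1: mean(nerve) > mean(muscle)
      t_stat, p_value = stats.ttest_ind(nerve, muscle, equal_var=True,
                                        alternative="greater")
      print(f"t = {t_stat:.3f}, one-sided p-value = {p_value:.4f}")
      # Reject H0 at the 0.05 level when p_value < 0.05.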

      1. Open Hardness-Testing.jmp

      2. Select Analyze > Matched Pairs.

      3. Select Tip 1 and Tip 2 for Y, Paired Response.

      4. Click OK.

      [JMP output: Matched Pairs report for Tip 1 and Tip 2]

      The p-value, Prob > |t| = 0.7976, is larger than the standard significance level of α = 0.05, so there is no evidence of a difference in the performance of the two tips. A computational sketch of the paired t-test appears after the final step of this example.

      5. Leave Hardness-Testing.jmp open for the next exercise.
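      As a rough illustration of the matched-pairs analysis, the paired t-test can be sketched with SciPy as below. The depth readings are hypothetical placeholders, not the Hardness-Testing data.

      # Sketch of a paired (matched-pairs) t-test (hypothetical data).
      import numpy as np
      from scipy import stats

      tip1 = np.array([43., 41., 45., 47., 44., 42., 46., 45., 44., 43.])   # hypothetical
      tip2 = np.array([44., 40., 46., 46., 45., 41., 47., 44., 45., 42.])   # hypothetical

      # Two-sided test of H0: mean(tip1 - tip2) == 0
      t_stat, p_value = stats.ttest_rel(tip1, tip2)
      print(f"t = {t_stat:.3f}, Prob > |t| = {p_value:.4f}")

      # The standard deviation of the paired differences is the variability
      # estimate used by the matched-pairs analysis.
      print("std of differences =", np.std(tip1 - tip2, ddof=1))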

      1. Return to the Hardness-Testing table opened in the previous example.

      2. Select Tables > Stack. This creates a new data table in long format, with one observation per row. Most JMP platforms expect data to appear in long format.

      3. Select Tip 1 and Tip 2 for Stack Columns.

      4. Type “Depth” in the Stacked Data Column field.

      5. Type “Tip” in the Source Label Column field.

      6. Type “Hardness-Stacked” in the Output table name field.

      7. Click OK.

      8. Hardness-Stacked is now the current data table. Select Analyze > Fit Y by X.

      9. Select Depth for Y, Response and Tip for X, Grouping.

      10. Click OK.

      11. Click the red triangle next to One-way Analysis of Depth by Tip and select Means/Anova/Pooled t.

      [JMP output: Oneway Analysis of Depth By Tip with the Means/Anova/Pooled t report]

      The Root Mean Square Error of 2.315407 is the pooled standard deviation estimate from the two-sample t-test. Compared with the standard deviation estimate of 1.20 from the paired difference test, we see that blocking has reduced the estimate of variability considerably. Though we do not work through the details here, the same comparison could be performed for the Fluorescence data from Example 2.1. A computational sketch of the stacking step and the pooled t-test appears after step 12.

      12. Leave Hardness-Stacked.jmp and the Fit Y by X output window open for the next exercise.
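      The sketch below mirrors these steps outside JMP: it stacks hypothetical wide-format hardness data into long format with pandas and then runs the pooled two-sample t-test with SciPy. The column names follow the steps above; the depth values are placeholders, not the Hardness-Testing data.

      # Sketch: reshape wide data to long ("stacked") format, then pooled t-test.
      import numpy as np
      import pandas as pd
      from scipy import stats

      wide = pd.DataFrame({
          "Tip 1": [43., 41., 45., 47., 44., 42., 46., 45., 44., 43.],   # hypothetical
          "Tip 2": [44., 40., 46., 46., 45., 41., 47., 44., 45., 42.],   # hypothetical
      })

      # Equivalent of Tables > Stack: one observation per row.
      stacked = wide.melt(var_name="Tip", value_name="Depth")

      tip1 = stacked.loc[stacked["Tip"] == "Tip 1", "Depth"]
      tip2 = stacked.loc[stacked["Tip"] == "Tip 2", "Depth"]

      # Pooled two-sample t-test; Means/Anova/Pooled t ignores the pairing.
      t_stat, p_value = stats.ttest_ind(tip1, tip2, equal_var=True)

      # Pooled standard deviation, reported by JMP as Root Mean Square Error.
      n1, n2 = len(tip1), len(tip2)
      sp = np.sqrt(((n1 - 1) * tip1.var(ddof=1) + (n2 - 1) * tip2.var(ddof=1))
                   / (n1 + n2 - 2))
      print(f"t = {t_stat:.3f}, p = {p_value:.4f}, pooled SD = {sp:.3f}")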

      Example 2.3 Testing for the Equality of Variances

      This example demonstrates how to test for the equality of two population variances. Section 2.6 of the textbook also discusses hypothesis testing for whether the variance of a single population is equal to a given constant. Though not shown here, the test for a single variance may be performed in the Distribution platform.

      1. Return to the Fit Y by X platform from the previous example.

      2. Click the red triangle next to One-way Analysis of Depth by Tip and select Unequal Variances.

      [JMP output: Tests that the Variances are Equal]

      3. Save Hardness-Stacked.jmp.

      The p-value for the F test (described in the textbook) of the null hypothesis of equal variances, against a two-sided alternative, is 0.8393. The data do not indicate a difference between the variances of the depths produced by Tip 1 and Tip 2. Because a slightly different data set is used here, the F Ratio of 1.1492 differs from the ratio of 1.34 that appears in the book. Furthermore, the textbook uses a one-sided test with the alternative hypothesis that the variance of the depth produced by Tip 1 is greater than that produced by Tip 2. Since the sample standard deviation from Tip 1 is greater than that from Tip 2, the F Ratios for the one- and two-sided tests are both equal to 1.1492, but the p-value for the one-sided test would be 0.4197. A computational sketch of the F test and the Levene test appears after the final step below.

      It is important to remember that the F test is extremely sensitive to the assumption of normality. If the population has heavier tails than a normal distribution, this test will reject the null hypothesis (that the population variances are equal) more often than it should. By contrast, the Levene test is robust to departures from normality.

      4. Select Window > Close All.
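      As a rough sketch of these variance comparisons outside JMP, the code below computes the F ratio test directly from the sample variances and also runs the median-centered Levene (Brown-Forsythe) test from SciPy. The depth values are hypothetical placeholders, not the Hardness-Testing data.

      # Sketch: F test for equality of two variances, plus Levene's test.
      import numpy as np
      from scipy import stats

      tip1 = np.array([43., 41., 45., 47., 44., 42., 46., 45., 44., 43.])   # hypothetical
      tip2 = np.array([44., 40., 46., 46., 45., 41., 47., 44., 45., 42.])   # hypothetical

      n1, n2 = len(tip1), len(tip2)
      f_ratio = tip1.var(ddof=1) / tip2.var(ddof=1)

      # Two-sided p-value for H0: var(tip1) == var(tip2)
      p_two_sided = 2 * min(stats.f.cdf(f_ratio, n1 - 1, n2 - 1),
                            stats.f.sf(f_ratio, n1 - 1, n2 - 1))
      # One-sided p-value for H1: var(tip1) > var(tip2)
      p_one_sided = stats.f.sf(f_ratio, n1 - 1, n2 - 1)
      print(f"F = {f_ratio:.4f}, two-sided p = {p_two_sided:.4f}, "
            f"one-sided p = {p_one_sided:.4f}")

      # Levene's test (median-centered) is robust to departures from normality.
      w_stat, p_levene = stats.levene(tip1, tip2, center="median")
      print(f"Levene W = {w_stat:.4f}, p = {p_levene:.4f}")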

      3  Experiments with a Single Factor: The Analysis of Variance

       Section 3.1 A One-way ANOVA Example

       Section 3.4 Model Adequacy Checking

       Section 3.8.1 Single Factor Experiment

       Section 3.8.2 Application of a Designed Experiment

       Section 3.8.3 Discovering Dispersion Effects

      In this chapter, the t-test is generalized to accommodate factors with more than two levels. The method of analysis of variance (ANOVA) introduced here allows us to study the equality of the means of three or more factor levels. ANOVA partitions the total sample variance into two parts: the variance explained by the factor under study, and the remaining, unexplained variance.
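      As an illustration of this partitioning, the sketch below computes the between-group (factor) and within-group (error) sums of squares by hand for three hypothetical factor levels and checks the resulting F statistic against scipy.stats.f_oneway. The data are placeholders, not taken from the textbook.

      # Sketch: partition the total sum of squares in a one-way ANOVA (hypothetical data).
      import numpy as np
      from scipy import stats

      groups = [np.array([12., 15., 14., 13.]),   # hypothetical level 1
                np.array([18., 20., 19., 21.]),   # hypothetical level 2
                np.array([11., 10., 13., 12.])]   # hypothetical level 3

      all_obs = np.concatenate(groups)
      grand_mean = all_obs.mean()

      # Between-group ("explained") and within-group ("unexplained") sums of squares.
      ss_treatment = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
      ss_error = sum(((g - g.mean()) ** 2).sum() for g in groups)

      df_treatment = len(groups) - 1
      df_error = len(all_obs) - len(groups)
      f_manual = (ss_treatment / df_treatment) / (ss_error / df_error)

      f_scipy, p_value = stats.f_oneway(*groups)
      print(f"F (by hand) = {f_manual:.3f}, F (scipy) = {f_scipy:.3f}, p = {p_value:.4g}")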

      The method makes several assumptions about the distribution of the random error term in the model. If the model structure represents the true structure of the process, the model residuals may be thought of as random numbers generated from the distribution of the random error term, which is typically assumed to be a normal distribution. Several diagnostics are available for the residuals. They may be plotted on a normal quantile plot to check the assumption of normality of the random error term. They may also be plotted against the predicted values: the residuals and predicted values ought to be independent, and no patterns should be present in the plot. ANOVA also assumes that the error terms are independent and identically distributed. This chapter considers two formal tests, Bartlett’s and Levene’s, for the homogeneity of residual variance across factor levels. If any of the residual diagnostics show abnormalities, a transformation of the response variable is often useful for improving the model fit.
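      The following is a minimal sketch of these residual checks for a one-way model, again with hypothetical data: the fitted value for each observation is its group mean, the residual is the deviation from that mean, a normal quantile (probability) plot assesses normality, and Bartlett's and Levene's tests check the homogeneity of variance across levels.

      # Sketch: residual diagnostics for a one-way ANOVA model (hypothetical data).
      import numpy as np
      from scipy import stats

      groups = [np.array([12., 15., 14., 13.]),
                np.array([18., 20., 19., 21.]),
                np.array([11., 10., 13., 12.])]

      # For a one-way model the fitted value is the group mean and the
      # residual is the deviation from that mean.
      fitted = np.concatenate([np.full(len(g), g.mean()) for g in groups])
      residuals = np.concatenate([g - g.mean() for g in groups])

      # Normal quantile plot coordinates; the correlation r should be close to 1
      # if the residuals are consistent with a normal distribution.
      (osm, osr), (slope, intercept, r) = stats.probplot(residuals, dist="norm")
      print(f"normal quantile plot correlation r = {r:.3f}")

      # Formal tests for equal residual variance across factor levels.
      print("Bartlett:", stats.bartlett(*groups))
      print("Levene  :", stats.levene(*groups, center="median"))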

      When the ANOVA test rejects the null hypothesis that all treatment means are equal, it is often necessary to know which factor levels are significantly different from each other. Special techniques are necessary for multiple comparisons of different linear combinations of factor level means in order to control the so-called experimentwise error rate. Examples are presented for Tukey’s HSD (honestly significant difference) test and the Fisher (Student’s t) least significant difference method. If one of the factors represents a control group, Dunnett’s test may be used to compare the control group with each of the other factor levels.
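      As a sketch of one such multiple-comparison procedure, Tukey's HSD is available in SciPy (version 1.8 or later); the three factor levels below are the same kind of hypothetical placeholder data used above.

      # Sketch: Tukey's HSD multiple-comparison procedure (hypothetical data).
      # Requires SciPy >= 1.8 for scipy.stats.tukey_hsd.
      import numpy as np
      from scipy import stats

      level_1 = np.array([12., 15., 14., 13.])   # hypothetical
      level_2 = np.array([18., 20., 19., 21.])   # hypothetical
      level_3 = np.array([11., 10., 13., 12.])   # hypothetical

      res = stats.tukey_hsd(level_1, level_2, level_3)
      print(res)          # pairwise differences with family-wise confidence intervals
      print(res.pvalue)   # matrix of adjusted p-values for each pair of levels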

      Other topics covered include power analysis for ANOVA to determine a required sample size, an introduction to the random effects models that are useful when the factor levels are only a sample of a larger population, and an example of a nonparametric method. The Kruskal-Wallis test relaxes the assumption that the response distribution is normal in each factor level, though it does require that the distributions across factor levels have the same shape.
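      Where the normality assumption is doubtful, the Kruskal-Wallis test mentioned above has a direct SciPy counterpart; once more the data are hypothetical placeholders.

      # Sketch: Kruskal-Wallis rank-based test for equal location across levels.
      import numpy as np
      from scipy import stats

      level_1 = np.array([12., 15., 14., 13.])   # hypothetical
      level_2 = np.array([18., 20., 19., 21.])   # hypothetical
      level_3 = np.array([11., 10., 13., 12.])   # hypothetical

      h_stat, p_value = stats.kruskal(level_1, level_2, level_3)
      print(f"H = {h_stat:.3f}, p = {p_value:.4f}")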

      The first example will illustrate how to build an ANOVA model from data imported into JMP. This entails specifying the response column and the factor column, and ensuring that the factor column is set to the nominal modeling type. Afterward, we will show how models may be designed in JMP, and how the appropriate modeling options are saved as scripts attached to the data table.
