The Statistics Unit provides professional support to FPL personnel who need assistance with designing experiments, analyzing data, mathematical modeling, using computers for data analysis, and summarizing experimental results through graphical displays of data. This support is free for all Forest Service-funded activities and for any cooperative activities with limited funding. The mission of the Statistics Unit, as defined in its Research Work Unit Description, is:

To enhance the integrity and efficiency of the Forest Products Laboratory's research efforts through the development, evaluation, and promotion of modern statistical methods.

With this brochure, we hope to introduce FPL employees to the services available from the Statistics Unit, explain how these services can be obtained, and encourage employees to use them. We also want to emphasize the importance of consulting with statisticians early in the research cycle.

The support activities provided by the Statistics Unit can be broadly classified into three categories: (1) traditional statistical support, which includes experiment design and data analysis; (2) mathematical and computing support; and (3) in-depth collaborative research with FPL personnel.

The Statistics Unit provides traditional statistical support to FPL personnel: helping them design experiments, analyze the resulting data, and present results in publications and presentations. In addition, the Statistics Unit provides technical review of study plans and manuscripts; members of the unit will even help write sections of study plans or manuscripts if needed. The unit also provides guidance to FPL personnel on the statistical aspects of Total Quality Management, Good Laboratory Practices, Quality Control, and Quality Assurance. In general, statistical support is available for any aspect of the research process. Probably the most important support the Statistics Unit provides is in the design of experiments, which is discussed in more detail later.

The members of the Statistics Unit have expertise in areas other than pure statistics and are quite willing to share that expertise. In the past, we have provided support in each of the following areas:

- Traditional mathematical fields such as geometry, trigonometry, calculus, numerical analysis, and optimization
- Computer programs for mathematical and statistical analysis such as SLATEC, IMSL, SAS, and S-Plus
- Use of Unix workstations and microcomputers for mathematical and statistical analysis
- Programming support in FORTRAN, Perl, Java, C, Pascal, Lotus, GKS graphics, and Macintosh applications
- The development of computer programs that can be run over the World Wide Web

Research efforts by members of the Statistics Unit usually fall into two major categories: collaborative research and research in statistical methodology. In collaborative research, the statistician is an integral part of a research program requiring extensive statistical input. Research in statistical methodology is aimed at both developing new statistical procedures and evaluating existing statistical procedures when either is needed by research programs at FPL. Early involvement of the Statistics Unit in research studies is critical both to efficient design of experiments and to recognizing situations where a research support role might benefit the research program.

The Statistics Unit offers substantial statistical support in experiment design, developing study plans, and analyzing and summarizing experimental results.

This is probably the area in which statisticians can contribute the most to an efficient research program, yet it is also our least utilized service. An hour or two spent with a statistician at the beginning of a project can prevent a great deal of heartache at the end of the project. Statisticians are trained to ask, and to help scientists answer, a series of important and often overlooked questions:

- What, exactly, are the real objectives of the study?
- Are we asking the right questions?
- Are the data from the proposed study likely to yield a definitive answer to these questions?
- What is the target population that we are trying to characterize or about which we are trying to draw conclusions?
- Are we sampling from this target population?
- Are we drawing enough replicates from our populations to be able to detect differences that are practically significant?
- Are we drawing enough replicates from our populations to be able to produce characterizations that are sufficiently precise?
- Are our "replicates" true replicates or are they subsamples so that we end up characterizing a few panels or boards rather than the true target population of panels or boards?
- Are there "lurking covariables" (such as time, temperature, relative humidity, location, technician, specific gravity, ...) that, unrecorded or unaccounted for, can hide relations or produce spurious effects?
- Are there covariables (such as MOE, specific gravity, ...) that we can use to our advantage to reduce the number of replicates that we need to take?
- Would a pilot study be appropriate? Before jumping headfirst into a large and expensive study that involves multiple levels of multiple factors, a researcher should have a feel for the variability of the measurements, the linear or nonlinear ways that variables affect the response, and the relative importance of the various variables. If the field is a new one, performing a series of small studies before engaging in one or two massive ones might be cost effective.
- How are the data to be collected, cleaned, and analyzed? Can the method of collection be streamlined? Can the proposed analysis be carried out on the data we are likely to obtain? Is it likely to yield definitive answers to the questions posed?

We strongly encourage scientists to stop by and discuss their study plans with us while they are still being formulated.
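The replication questions above often reduce to a power calculation. As an illustration only, here is a minimal normal-approximation sketch in Python; the standard deviation, detectable difference, and 80% power target are made-up example values, not FPL defaults:

```python
import math
from statistics import NormalDist

def n_per_group(sigma, delta, alpha=0.05, power=0.80):
    """Approximate replicates per group needed so a two-sided,
    two-sample test detects a true mean difference `delta`
    with the requested power (normal approximation)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value
    z_beta = NormalDist().inv_cdf(power)           # power quantile
    return math.ceil(2 * ((z_alpha + z_beta) * sigma / delta) ** 2)

# Example: to detect a difference half as large as the standard
# deviation, roughly 63 replicates per group are needed.
print(n_per_group(sigma=10.0, delta=5.0))
```

Note the leverage in this formula: halving the difference you need to detect roughly quadruples the required sample size, which is exactly why these questions are worth asking before specimens are cut.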

Members of the Statistics Unit are available to help write any aspect of a study plan where their expertise would help in the planning process. The Statistics Unit has a suggested framework for a study plan available to scientists who would like a guide. As an aid to the formulation of any study plan, we suggest that the following questions be addressed:

- **What are the objectives of the study?**
- **What is the target population?**
- **What are the treatments, and what constitutes a control in the study?**
- **What are the experimental units going to be?** Are we getting true replicates, or are we subsampling? How do we allocate treatments to the experimental units?
- **What are the important covariates?** (for example, time, temperature, relative humidity, specific gravity, ...) How do we intend to account for the effects of the covariates? Randomize? Block? Measure and include in an analysis of covariance?
- **What is the justification of the proposed sample sizes?**
- **What uncertainty do we expect in our measurements?** What are the relevant coefficients of variation? The more variable the material we are working with, the more careful we must be in blocking and in measuring covariables, and the larger our sample sizes will have to be.
- **If we desire to identify differences among "treatments," how big do the differences have to be before it becomes important for us to detect them: 5%, 10%, 20%, ...?**
- **How certain do we want to be that we will detect the differences if they exist?** Greater certainty (greater statistical "power") generally requires larger sample sizes and thus more expensive experiments. On the other hand, if the experiment is not properly designed and sample sizes are inadequate, the experiment may not have a reasonable chance of success. It then becomes misleading or noninformative science, which is a cost to be balanced against the cost of extra samples. Also, a properly designed experiment, one that incorporates appropriate blocking schemes and measurement of covariables, can yield more information for smaller sample sizes.
- **If we want to characterize properties, how precise do these characterizations need to be?** How wide a confidence interval are we willing to accept on a mean or a fifth percentile? The answers to these questions determine the sample sizes that we must take.

- **How are the data to be collected, entered, cleaned, analyzed, and archived?** What packages will be used to analyze the data? On what machines? By whom? How will the analyses be summarized and presented? Will the analyses meet the objectives of the study? Will the data be merged into an existing database, and if so, is all the information needed for that database being collected?
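The precision question has the same flavor: a target confidence-interval half-width implies a sample size. A minimal sketch, again with made-up numbers, assuming a known-sigma normal approximation (a t-based answer would be slightly larger):

```python
import math
from statistics import NormalDist

def n_for_halfwidth(sigma, halfwidth, conf=0.95):
    """Approximate sample size so a conf-level confidence interval
    for a mean has the desired half-width, under a known-sigma
    normal approximation."""
    z = NormalDist().inv_cdf(0.5 + conf / 2)  # two-sided critical value
    return math.ceil((z * sigma / halfwidth) ** 2)

# Example: pinning a mean to within 2 units when sigma is 10
# takes about 97 observations at 95% confidence.
print(n_for_halfwidth(sigma=10.0, halfwidth=2.0))
```

As with power, precision is expensive: halving the acceptable interval width quadruples the required sample size, so the tolerable width should be settled before the study plan is finalized.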

Today, with the universal availability of microcomputers and statistical packages, many scientists like to do their own analyses. This is certainly a good idea for at least the initial stages of an analysis; a scientist has to get her/his hands dirty in the data to really get a feel for what is going on. There are circumstances, however, when a scientist might prefer to have the Statistics Unit assist in the analysis. The members of the Statistics Unit have access to powerful computer workstations and state-of-the-art statistical packages such as S-Plus and SAS. They also have expertise in writing custom FORTRAN, Perl, Java, C, and Pascal programs and in specialized computer graphics. Given a massive data set or a complex and potentially messy data analysis (lots of outliers, missing data, nonlinear models, ...), a scientist might wish to work with the Statistics Unit on the analysis.

- James Evans, 608-231-9332
- Cherilyn Hatfield, 608-231-9334
- Vicki Herian, 608-231-9347
- Patricia Lebow, 608-231-9331
- Steve Verrill, 608-231-9375

We welcome your comments and suggestions.

This page was last modified on 4/30/03.