SportsTurf

April 2014

SportsTurf provides current, practical, and technical content on issues relevant to sports turf managers, including facilities managers. Most readers are athletic field managers, from the professional level through parks and recreation and universities.

Issue link: https://read.dmtmag.com/i/282941


The center of the bell is the mean, and most of the data is centered on the mean. The red area represents one standard deviation on either side of the mean, covering 68% of the data (34% on either side of the average). The green area extends the range to two standard deviations on either side of the mean, or 95% of the data (red plus green) under the curve. The blue area extends it to three standard deviations on either side of the mean, or 99.7% of the data. Since every set of data has a different mean and standard deviation, an infinite number of normal distribution curves exist.

Confidence intervals (CI), usually set by the researcher, attach a level of confidence or reliability to an end result, such as the response of a plant (or person) to some treatment in repeatable trials. The CI is expressed as a percentage, so when we say, "we are 95% confident that this herbicide application will provide 98% control of dandelion," we mean that the result will hold in 95% of repeated observations. In practice, confidence intervals are typically stated at the 95% confidence level, although they can be shown at other levels such as 68%, 95%, and 99%. When a research trial is conducted, the confidence level is the complement of the corresponding level of significance; i.e., a 95% confidence interval reflects a significance level of 0.05, referred to as alpha (α). The level of confidence often depends on the number of observations, with more observations yielding a higher level of confidence.

When data is collected, researchers typically look for something unusual or out of the ordinary and often ask whether it is significantly different from the norm: does it happen with a very small probability of occurring just by chance? Least Significant Difference (LSD) is a measure of significance, usually reported at a level of significance of α = 0.05 and denoted LSD 0.05.
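The 68/95/99.7 rule and a 95% confidence interval described above can be sketched with a short script. This is a minimal illustration using simulated data, not from the article: the "dandelion control" numbers and the choice of mean and spread are invented for the example, and the CI uses the standard large-sample normal approximation.

```python
import random
import statistics

# Hypothetical data: simulated dandelion-control percentages from
# many repeated trials (values are illustrative only).
random.seed(42)
observations = [random.gauss(98.0, 1.5) for _ in range(1000)]

mean = statistics.mean(observations)
sd = statistics.stdev(observations)

# Empirical rule: fraction of observations within 1, 2, and 3
# standard deviations of the mean (about 68%, 95%, and 99.7%).
for k in (1, 2, 3):
    within = sum(1 for x in observations if abs(x - mean) <= k * sd) / len(observations)
    print(f"within {k} SD of the mean: {within:.1%}")

# Approximate 95% confidence interval for the mean, using the
# normal critical value 1.96 (large-sample approximation).
margin = 1.96 * sd / len(observations) ** 0.5
print(f"95% CI for the mean: {mean - margin:.2f} to {mean + margin:.2f}")
```

Running the script shows roughly 68%, 95%, and 99.7% of the simulated observations falling within one, two, and three standard deviations, matching the red, green, and blue regions of the bell curve.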
We will revisit the use of this term when we show an example of a data table and bar graph.

EXPERIMENTAL DESIGNS

How an experiment is designed can make the difference between collecting good data and bad data. The objective of experiments is to make comparisons of treatments that will support a thought or hypothesis about an area of interest. Treatments can include applications of fertilizers or pesticides, the incorporation of a cultural practice, the evaluation of disease-resistant turfgrass cultivars, or combinations thereof. While comparisons among treatments are important, so are comparisons to an untreated control, which reveals the true effect of each treatment relative to applying nothing. The untreated control establishes a baseline for comparison. Collecting good data and then applying the proper data analysis is important for drawing appropriate conclusions about the experiment.

In experimental designs, data (measurements/observations) are usually subject to various uncertain external factors. Treatments and full experiments are usually repeated (replications) to help identify sources of variation and to better estimate the true effects of the treatments, thereby strengthening the reliability and validity of the experiment. Statistically, replications help to reduce experimental error due to unknown or uncontrollable factors (e.g., variations in soils). Replicating treatments within an experiment is as important as repeating entire experiments to see whether results can be reproduced with confidence. Randomization is also an important component of experimental design: one way to minimize bias in an experiment is to randomize treatments. This will become clearer as we look at some experimental designs.

Two common experimental designs that you may hear of in a seminar or conference presentation are illustrated below. Randomized Complete Block Designs are among the simplest and most common experimental designs for field trials.
Here, you may be looking at the effects of one type of treatment, e.g., herbicide effectiveness. Treatments can be replicated three, four, or more times depending on the type of trial. Disease trials tend to have more replications due to the high variability among treatments from replication to replication. Treatments also remain in single blocks. Note that the seven treatments are completely randomized within each of three replications, or blocks. The treatment numbers correspond to a treatment list.

Randomized Complete Block Design (plot layout):

Replicate 1:  7  4  6  1  3  5  2
Replicate 2:  6  4  1  7  5  3  2
Replicate 3:  5  7  2  3  1  4  6

Treatment No.  Treatment
1              Untreated control
2              Herbicide A, Rate 1
3              Herbicide A, Rate 2
4              Herbicide B, Rate 1
5              Herbicide B, Rate 2
6              Herbicide C, Rate 1
7              Herbicide C, Rate 2
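The layout above can be generated programmatically. The sketch below, a simple illustration rather than the method used for the article's layout, builds a randomized complete block design: each of the seven treatments appears exactly once per replicate, and the order within each replicate is shuffled independently.

```python
import random

# The seven treatments from the table above.
treatments = [
    "Untreated control",
    "Herbicide A, Rate 1",
    "Herbicide A, Rate 2",
    "Herbicide B, Rate 1",
    "Herbicide B, Rate 2",
    "Herbicide C, Rate 1",
    "Herbicide C, Rate 2",
]

random.seed(1)  # fixed seed so the layout is reproducible

layout = []
for block in range(3):
    # Every treatment appears exactly once in each block;
    # only the order within the block is randomized.
    order = list(range(1, len(treatments) + 1))
    random.shuffle(order)
    layout.append(order)
    print(f"Replicate {block + 1}: {order}")
```

Because each replicate is a complete, independently shuffled set of all seven treatment numbers, the output has the same structure as the plot layout shown above (though the specific ordering depends on the random seed).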
