The `chick1` dataset is a data frame of 578 rows and 4 columns ("weight", "Time", "Chick" and "Diet") recording the weight progression of several chicks. The little chicklings are each given a specific diet. There are four types of diet, and the farmer wants to know which one fattens the chicks the fastest.

It’s time to do some exploratory statistics on the data frame!

Once the data has been read in using read.csv(), we perform a few operations, shown below, to make the data ggplot2-ready. The data itself is pretty clean and doesn't require much work.

chick1$Weight <- as.numeric(chick1$Weight)
chick1$Time <- as.numeric(chick1$Time)
chick1$Chick <- factor(chick1$Chick)
chick1$Diet <- factor(chick1$Diet)
chick1 <- chick1[-579, ] # remove the one NA row

And now for the ggplot-ing.

Use ggplot() to map Time to x and Weight to y within the aes() function. Add col=Diet to colour the lines by the diet fed to the chicks.

Add geom_line() at the end to draw the lines. To draw one line per chick, add group=Chick to the aes() of geom_line().

Finally, add geom_smooth() to draw a smoothing curve that shows the general trend of the data. The gray area around the curve is a confidence interval, suggesting how much uncertainty there is in this smoothing curve.

ggplot(chick1, aes(x = Time, y = Weight, col = Diet)) +
  geom_line(aes(group = Chick)) +
  geom_smooth(lwd = 2)

If we want to turn off the confidence interval, we can add an option to `geom_smooth`: specifically se=FALSE, where "se" stands for "standard error".

ggplot(chick1, aes(x = Time, y = Weight, col = Diet)) +
  geom_line(aes(group = Chick)) +
  geom_smooth(lwd = 2, se = FALSE)

**The visual clearly shows that Diet 3 fattens up the chicks the fastest.**

Now we have four groups of the independent variable (Diet) and one dependent variable (Weight). This is a good case for ANOVA (Analysis of Variance), a commonly used statistical technique for investigating data by comparing the means of subsets of the data. In one-way ANOVA, the data is subdivided into groups based on a single classification factor.
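As a quick sanity check before the formal test, we can compare the group means directly. The sketch below uses R's built-in ChickWeight dataset, which this chick1 data appears to mirror, so the exact numbers may differ slightly from your CSV:

```r
# Built-in ChickWeight used as a stand-in for chick1
data(ChickWeight)

# Mean weight per diet group, pooled over all time points
group_means <- tapply(ChickWeight$weight, ChickWeight$Diet, mean)
print(round(group_means, 1))

# Grand mean, for reference: one-way ANOVA asks whether the group
# means differ by more than within-group noise would allow
print(round(mean(ChickWeight$weight), 1))
```

The raw means already hint that diet 3 produces the heaviest chicks on average, but only the ANOVA can tell us whether the differences are statistically meaningful.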

Our setup here is:

- Dependent variable: Weight
- Independent variable: Diet 1, 2, 3, 4 (treatment)
- Null hypothesis H0: all diets lead to equal weight gain
- Alternative hypothesis H1: not all diets lead to equal weight gain

When we conduct an analysis of variance, the null hypothesis is that there is no difference in treatment means; once the null hypothesis is rejected, the question becomes which treatments differ.


Next, we need to test whether the assumption of homogeneity of variance holds; it must hold for the results of ANOVA to be valid. This is done using Levene's test.

library(car) # leveneTest() comes from the car package
leveneTest(chick1$Weight ~ chick1$Diet)

The test gives a high F value of 9.6 and a low p-value. This means the data does not show homogeneity of variance, i.e. we have **violated an assumption of ANOVA** and need to take steps to rectify this. We do so with a **logarithmic transformation**.

chick2 <- chick1
chick2$Weight <- log(chick2$Weight)
leveneTest(chick2$Weight ~ chick2$Diet) # re-run Levene's test on the transformed data

With the log transformation, Levene's test gives a p-value of 0.049 ≈ 0.05. A value of p = 0.05 is probably good enough to consider the homogeneity of variance assumption met. Now we can perform the ANOVA knowing that its results will hold.

aov2 <- aov(chick2$Weight ~ chick2$Diet)
summary(aov2)

The results show an F value of 8.744 and a p-value of 1.12e-05, a significant effect. In other words, the between-group variance is 8.744 times the within-group variance: the variation of mean weight among the different diets is much larger than the variation of weight within each diet, and our p-value is below 0.05 (the usual scientific threshold). Hence **we reject the null hypothesis in favour of the alternative H1**: there is a significant relationship between diet and weight gain.
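The F value reported by summary() is simply the ratio of the two mean squares, and can be recovered by hand from the ANOVA table. A sketch, again using the built-in ChickWeight data as a stand-in for chick2, with the same log transform:

```r
data(ChickWeight)
cw <- ChickWeight
cw$logw <- log(cw$weight) # same logarithmic adjustment as above

fit <- aov(logw ~ Diet, data = cw)
tab <- summary(fit)[[1]] # the ANOVA table as a data frame

# F = between-group mean square / within-group mean square
# (row 1 is the Diet term, row 2 is Residuals)
f_manual <- tab[1, "Mean Sq"] / tab[2, "Mean Sq"]
print(all.equal(f_manual, tab[1, "F value"])) # the two agree
```

This is exactly the "8.744 times between-group vs. within-group variance" reading given above.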

The F-test showed a significant effect somewhere among the groups, but it did not tell us which pairwise comparisons are significant. This is where **post-hoc** tests come into play. They help you find out which groups differ significantly from one another and which do not. More formally, post-hoc tests allow multiple pairwise comparisons without inflating the type I error rate (the probability of rejecting a null hypothesis that is actually true).
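To see why unadjusted pairwise t-tests are a problem here: with four diets there are choose(4, 2) = 6 pairwise comparisons, so the chance of at least one false positive grows well beyond the nominal 5%. A back-of-the-envelope calculation (assuming independent tests, which is a simplification since the comparisons share data):

```r
alpha <- 0.05
k <- choose(4, 2) # 6 pairwise comparisons among 4 diets

# Probability of at least one type I error across k independent
# tests, each run at level alpha
familywise <- 1 - (1 - alpha)^k
print(round(familywise, 3)) # about 0.265
```

Roughly a one-in-four chance of a spurious "significant" pair, which is what Tukey's procedure protects against by controlling the family-wise error rate.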

In R you can use Tukey's procedure via the TukeyHSD() function, which takes the fitted ANOVA object as input.

tukey <- TukeyHSD(aov2)
tukey

Tukey multiple comparisons of means
    95% family-wise confidence level

Fit: aov(formula = chick2$Weight ~ chick2$Diet)

$`chick2$Diet`
           diff         lwr       upr     p adj
2-1  0.15278600 -0.01463502 0.3202070 0.0879198
3-1  0.28030756  0.11288653 0.4477286 0.0001109
4-1  0.26747943  0.09914285 0.4358160 0.0002824
3-2  0.12752155 -0.06293542 0.3179785 0.3114949
4-2  0.11469342 -0.07656886 0.3059557 0.4112309
4-3 -0.01282813 -0.20409041 0.1784342 0.9981647

From the table above (looking at “diff” and “p adj” columns) we can see which diets have significant differences in weight from others. For example we can conclude that:

**There is no significant difference** in weight between groups 3-2 (p = 0.311), 4-2 (p = 0.411), 4-3 (p = 0.998) and 2-1 (p = 0.088).

**There is a significant difference** in weight between groups 3-1 (p = 0.00011) and 4-1 (p = 0.00028).

Finally, the results obtained above can also be visualized by plotting the "tukey" object in R. Significant differences are the ones whose intervals do not cross zero.

plot(tukey) # plot the confidence intervals.

Each line represents the mean difference between two groups, with the corresponding confidence interval. Whenever the confidence interval does not include zero (the vertical line), the difference between the two groups is significant.

I hope you found it helpful and as always, comments are welcome!

Thanks for stopping by!