**Dataset:**

halo1.dat

**Source:**

Landy, D., and Sigall, H. (1974). “Beauty is Talent: Task Evaluation as a Function of the Performer’s Physical Attractiveness,” Journal of Personality and Social Psychology, 29: 299-304.

60 male undergraduates read an essay supposedly written by a female college freshman. They then evaluated the quality of the essay and the ability of its writer on several dimensions. By means of a photo attached to the essay, 20 evaluators were led to believe that the writer was physically attractive and 20 that she was unattractive. The remaining evaluators read the essay without any information about the writer’s appearance. 30 evaluators read a version of the essay that was well written while the other evaluators read a version that was poorly written. Significant main effects for essay quality and writer attractiveness were predicted and obtained.

**Description:**

Two-factor experiment to assess whether appearance affects the judgment of a student’s essay. The factors are Appearance (Attractive/Control/Unattractive) and Essay Quality (Good/Poor); a photo was attached to the essay in all but the control condition.

**Response:**

Grade on essay. Data simulated to match their means and standard deviations.

**Variables/Columns:**

Essay Quality 8 /* 1=Good, 2=Poor */

Student Attractiveness 16 /* 1=Attractive, 2=Control, 3=Unattractive */

Score 18-24

When we import halo1.dat using read.delim(), we get a single column V1 containing all the data values. The separation of V1 into three distinct columns and the subsequent transformations are done with functions from the handy tidyr and dplyr packages, as shown below.

```
library(dplyr)  # select()
library(tidyr)  # separate(), unite()

halo <- read.delim(file.choose(), header = FALSE, stringsAsFactors = FALSE)
halo <- separate(halo, V1, c("sno", "quality", "attractiveness", "score", "dec"))
halo <- unite(halo, scoretotal, score:dec, sep = ".")
halo <- select(halo, 2:4)
```

Next we’ll convert the two independent variables (Essay Quality, Attractiveness) to factors and the dependent variable (Score) to numeric before carrying out our ANOVA.

```
halo$scoretotal <- as.numeric(halo$scoretotal)
halo$quality <- factor(halo$quality)
halo$attractiveness <- factor(halo$attractiveness)
```

We can test three hypotheses:

1. Are scores higher for more attractive students?

2. Are scores higher for higher-quality essays?

3. Do student attractiveness and essay quality interact to produce higher scores?

Let’s start by looking at the means of our groups. Here, we use the tapply() function to generate a table of means and then plot the results.

```
halo_mean <- tapply(halo$scoretotal, list(halo$attractiveness, halo$quality), mean)

# Plot a bar-plot
barplot(halo_mean, beside = TRUE, col = c("orange", "blue", "green"),
        main = "Mean Scores", xlab = "Essay Quality", ylab = "Score")

# Add a legend
legend("topright", fill = c("orange", "blue", "green"),
       title = "Attractiveness", c("1", "2", "3"), cex = 0.5)

> halo_mean
       1      2
1 17.900 14.899
2 17.900 13.400
3 15.499  8.701
```

Visually, we can see that for a good-quality essay, the students were evaluated most favorably when they were attractive or when their appearance was unknown, and least favorably when they were unattractive. For a poor-quality essay, the students were evaluated most favorably when they were attractive, least favorably when they were unattractive, and intermediately when their appearance was unknown.

Let’s explore further by performing a two-way ANOVA. But before we do that, we need to test the homogeneity-of-variance assumption, which must hold for the results of an ANOVA to be valid.

```
library(car)  # leveneTest()
leveneTest(halo$scoretotal ~ halo$quality * halo$attractiveness)
```

The test gives a p-value of 0.2989 (> .05), so the assumption of homogeneity of variance holds: there is no significant difference in the variances across the groups.

And now for the ANOVA.

```
aov_halo <- aov(halo$scoretotal ~ halo$quality * halo$attractiveness)
summary(aov_halo)

> summary(aov_halo)
                                 Df Sum Sq Mean Sq F value   Pr(>F)
halo$quality                      1  340.8   340.8  17.231 0.000118 ***
halo$attractiveness               2  211.0   105.5   5.335 0.007687 **
halo$quality:halo$attractiveness  2   36.6    18.3   0.925 0.402832
Residuals                        54 1067.9    19.8
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
```

The very low p-values for quality and attractiveness indicate that each factor, on its own, has a significant main effect on scores. The high p-value for the interaction term shows that the two factors do not interact significantly. Since we do not have a significant interaction effect, we do not need to follow it up to see exactly where the interaction is coming from.
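As a quick sanity check on the ANOVA table, the reported p-values can be recovered directly from the F statistics with base R’s F-distribution function (the F values and degrees of freedom below are copied from the summary output above):

```r
# P(F > observed) with (df_effect, df_residual) degrees of freedom
pf(17.231, df1 = 1, df2 = 54, lower.tail = FALSE)  # quality:        ~ 0.000118
pf(5.335,  df1 = 2, df2 = 54, lower.tail = FALSE)  # attractiveness: ~ 0.0077
pf(0.925,  df1 = 2, df2 = 54, lower.tail = FALSE)  # interaction:    ~ 0.40
```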

We can now proceed with Post-hoc analysis to test the main effect pairwise comparisons via the following methods:

- TukeyHSD
- pairwise.t.test()
- Bonferroni Adjustment
- Holm’s adjustment

**Pairwise comparison using TukeyHSD**

```
TukeyHSD(aov_halo, "halo$quality")

> TukeyHSD(aov_halo, "halo$quality")
  Tukey multiple comparisons of means
    95% family-wise confidence level

Fit: aov(formula = halo$scoretotal ~ halo$quality * halo$attractiveness)

$`halo$quality`
         diff       lwr       upr     p adj
2-1 -4.766333 -7.068384 -2.464282 0.0001182
```

Studying the difference between the means and the Tukey-adjusted p-value: GoodQ_mean - PoorQ_mean = 4.77 | p-value = 0.0001182

The mean score is higher for good-quality essays than for poor-quality essays.

```
TukeyHSD(aov_halo, "halo$attractiveness")

> TukeyHSD(aov_halo, "halo$attractiveness")
  Tukey multiple comparisons of means
    95% family-wise confidence level

Fit: aov(formula = halo$scoretotal ~ halo$quality * halo$attractiveness)

$`halo$attractiveness`
       diff       lwr        upr     p adj
2-1 -0.7495 -4.138616  2.6396163 0.8555096
3-1 -4.2995 -7.688616 -0.9103837 0.0095632
3-2 -3.5500 -6.939116 -0.1608837 0.0381079
```

Studying the differences between the means and the Tukey-adjusted p-values:

Attractive_mean - Control_mean = 0.75 | p-value = 0.8555096

Attractive_mean - Unattractive_mean = 4.30 | p-value = 0.0095632

Control_mean - Unattractive_mean = 3.55 | p-value = 0.0381079

The mean score is higher for attractive people than for unattractive people (the halo effect).

**Pairwise comparison using pairwise.t.test()**

```
pairwise.t.test(halo$scoretotal, halo$attractiveness, p.adj = "none")
> pairwise.t.test(halo$scoretotal, halo$attractiveness, p.adj = "none")
Pairwise comparisons using t tests with pooled SD
data: halo$scoretotal and halo$attractiveness
1 2
2 0.6397 -
3 0.0091 0.0297
P value adjustment method: none
```

With no adjustments, the Attractive-Unattractive (1-3) and Control-Unattractive (2-3) comparisons are statistically significant, whereas the Attractive-Control (1-2) comparison is not. This suggests that both the Attractive and Control groups scored higher than the Unattractive group, but that there is insufficient statistical support to distinguish between the Attractive and Control groups.

```
pairwise.t.test(halo$scoretotal, halo$quality, p.adj = "none")
> pairwise.t.test(halo$scoretotal, halo$quality, p.adj = "none")
Pairwise comparisons using t tests with pooled SD
data: halo$scoretotal and halo$quality
1
2 0.00027
P value adjustment method: none
```

With no adjustments, the Good Quality-Poor Quality (1-2) comparison is statistically significant. This suggests that good-quality essays scored higher than poor-quality essays.

**Pairwise comparison using Bonferroni adjustment**

The Bonferroni adjustment simply divides the Type I error rate (.05) by the number of tests (in this case, three for attractiveness and one for essay quality). Hence, this method is often considered overly conservative. The Bonferroni adjustment can be made using p.adj = “bonferroni” in the pairwise.t.test() function.
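Equivalently, the adjusted p-values can be reproduced by multiplying each raw p-value by the number of tests (capping at 1), which is what base R’s p.adjust() does. As a quick check, feeding in the three unadjusted attractiveness p-values from earlier recovers the Bonferroni table:

```r
# Unadjusted p-values from pairwise.t.test(..., p.adj = "none"):
# the 1-2, 1-3, and 2-3 attractiveness comparisons
p_raw <- c(0.6397, 0.0091, 0.0297)

# Bonferroni: multiply each p-value by the number of tests (3), capped at 1
p.adjust(p_raw, method = "bonferroni")
# 1.0000 0.0273 0.0891  (rounds to the 1.000, 0.027, 0.089 reported by pairwise.t.test)
```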

```
pairwise.t.test(halo$scoretotal,halo$attractiveness,p.adj = "bonferroni")
> pairwise.t.test(halo$scoretotal,halo$attractiveness,p.adj = "bonferroni")
Pairwise comparisons using t tests with pooled SD
data: halo$scoretotal and halo$attractiveness
1 2
2 1.000 -
3 0.027 0.089
P value adjustment method: bonferroni
```

Using the Bonferroni adjustment, only the Attractive-Unattractive (1-3) comparison is statistically significant. This suggests that the Attractive group was rated higher than the Unattractive group, but that there is insufficient statistical support for the Control-Unattractive (2-3) and Attractive-Control (1-2) comparisons.

Notice that these results are more conservative than with no adjustment.

```
pairwise.t.test(halo$scoretotal,halo$quality,p.adj = "bonferroni")
> pairwise.t.test(halo$scoretotal,halo$quality,p.adj = "bonferroni")
Pairwise comparisons using t tests with pooled SD
data: halo$scoretotal and halo$quality
1
2 0.00027
P value adjustment method: bonferroni
```

Using the Bonferroni adjustment, the Good Quality-Poor Quality (1-2) comparison is statistically significant. This suggests that good-quality essays scored higher than poor-quality essays. (With only one comparison, the adjustment leaves the p-value unchanged.)

**Pairwise comparison using Holm’s adjustment**

The Holm adjustment sequentially compares the lowest p-value with a Type I error rate that is reduced for each consecutive test. In our case of attractiveness, this means that the smallest p-value is tested at the .05/3 level (.017), the second at the .05/2 level (.025), and the third at the .05/1 level (.05). This method is generally considered superior to the Bonferroni adjustment and can be employed using p.adj = “holm” in the pairwise.t.test() function.
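The same correction is available through p.adjust(), which multiplies the ordered p-values by n, n-1, ..., 1 and enforces monotonicity. Running it on the raw attractiveness p-values from earlier reproduces the Holm-adjusted values:

```r
# Unadjusted p-values for the 1-2, 1-3, and 2-3 attractiveness comparisons
p_raw <- c(0.6397, 0.0091, 0.0297)

# Holm: sort ascending, multiply by 3, 2, 1, then restore the original order
p.adjust(p_raw, method = "holm")
# 0.6397 0.0273 0.0594  (rounds to the 0.640, 0.027, 0.059 reported by pairwise.t.test)
```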

```
pairwise.t.test(halo$scoretotal,halo$attractiveness,p.adj = "holm")
> pairwise.t.test(halo$scoretotal,halo$attractiveness,p.adj = "holm")
Pairwise comparisons using t tests with pooled SD
data: halo$scoretotal and halo$attractiveness
1 2
2 0.640 -
3 0.027 0.059
P value adjustment method: holm
```

Using the Holm procedure, the Attractive-Unattractive (1-3) comparison remains significant (p = .027), while the Control-Unattractive (2-3) comparison (p = .059) falls just short of significance. The Holm results are thus stricter than using no adjustment, which flagged 2-3 as significant, but less conservative than the Bonferroni adjustment (p = .089 for 2-3).

**Conclusions:**

Based on our statistical tests, we can conclude that:

1. The students were evaluated most favorably when they were attractive and least favorably when they were unattractive.

2. Good-quality essays scored higher than poor-quality ones.

3. There is no statistical evidence that student attractiveness and essay quality interact to produce higher scores.