---
title: "Principle Component Aanalysis"
output: html_document
---
## Data
The data you will be using comes from the Assistments online intelligent tutoring system (https://www.assistments.org/). It describes students working through online math problems. Each student has the following data associated with them:
- id
- prior_prob_count: How many problems a student has answered in the system prior to this session
- prior_percent_correct: The percentage of problems a student has answered correctly prior to this session
- problems_attempted: The number of problems the student has attempted in the current session
- mean_correct: The average number of correct answers a student made on their first attempt at problems in the current session
- mean_hint: The average number of hints a student asked for in the current session
- mean_attempt: The average number of attempts a student took to answer a problem in the current session
- mean_confidence: The average confidence each student has in their ability to answer the problems in the current session
## Start by loading the data
```{r}
D1 <- read.csv("Assistments-confidence.csv") #file name is an assumption; point this at the Assistments CSV included with the assignment
#We won't need the id variable, so remove that.
D1$id <- NULL
```
## Create a correlation matrix of the relationships between the variables, including correlation coefficients for each pair of variables/features.
```{r}
#You can install the corrplot package to plot some pretty correlation matrices (sometimes called correlograms)
library(corrplot)
#Generate pairwise correlations
COR <- cor(D1)
corrplot(COR, order="AOE", method="circle", tl.pos="lt", type="upper",
tl.col="black", tl.cex=0.6, tl.srt=45,
addCoef.col="black", addCoefasPercent = TRUE,
sig.level=0.50, insig = "blank")
#Study your correlogram and save the image; you will need it later
```
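One caveat about the chunk above: in corrplot, the `sig.level` and `insig` arguments only take effect when a matrix of p-values is supplied through `p.mat`; without it they are silently ignored. A minimal sketch of how to make them effective using `cor.mtest()` from the same package (the 0.50 threshold is simply carried over from above):

```{r}
#cor.mtest() returns a list whose $p element holds the pairwise p-values
p.values <- cor.mtest(D1)$p
corrplot(COR, order="AOE", method="circle", tl.pos="lt", type="upper",
         tl.col="black", tl.cex=0.6, tl.srt=45,
         addCoef.col="black", addCoefasPercent = TRUE,
         p.mat = p.values, sig.level=0.50, insig = "blank")
```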
## Create a new data frame with the mean_correct variable removed
```{r}
D2 <- D1[, names(D1) != "mean_correct"]
#Then, scale and center your data for easier interpretation
D2 <- scale(D2, center = TRUE)
```
## Now run the PCA on the new data frame
```{r}
pca <- prcomp(D2, scale = TRUE)
```
## Although prcomp does not report the eigenvalues directly, we can print the standard deviation of each component; squaring these gives the variance accounted for by each component.
```{r}
pca$sdev
#To convert these into the variance accounted for we can square them; the squared values are the eigenvalues
pca$sdev^2
#A summary of our pca will give us the proportion of variance accounted for by each component
summary(pca)
#We can look at the scree plot below to get an idea of which components we should keep and which we should drop
plot(pca, type = "lines")
```
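If you want those proportions without reading them off `summary(pca)`, they can be computed directly from the squared standard deviations; a small sketch (`prop.var` is just an illustrative name):

```{r}
#Proportion of variance accounted for by each component
prop.var <- pca$sdev^2 / sum(pca$sdev^2)
prop.var
#Cumulative proportion, which is useful when deciding how many components to retain
cumsum(prop.var)
```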
## Think about which components you would drop and make a decision
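One way to make that decision concrete, sketched below rather than prescribed: keep components whose eigenvalue exceeds 1 (the Kaiser criterion), or keep enough components to reach a chosen cumulative-variance threshold (the 0.75 cutoff and the name `n.keep` are illustrative):

```{r}
#Kaiser criterion: components whose eigenvalue (sdev^2) is greater than 1
which(pca$sdev^2 > 1)
#Alternatively, the number of components needed to account for at least 75% of the variance
n.keep <- which(cumsum(pca$sdev^2 / sum(pca$sdev^2)) >= 0.75)[1]
n.keep
```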
## Part II
```{r}
#Now, create a data frame of the transformed data from your pca.
D3 <- as.data.frame(pca$x)
#Attach the variable "mean_correct" from your original data frame to D3.
D4 <- cbind(D3, mean_correct = D1$mean_correct)
#Now re-run your scatterplots and correlations between the transformed data and mean_correct. If you had dropped some components, would you have lost important information about mean_correct? (A quick check is sketched after this chunk.)
COR2 <- cor(D4)
corrplot(COR2, order="AOE", method="circle", tl.pos="lt", type="upper",
         tl.col="black", tl.cex=0.6, tl.srt=45,
         addCoef.col="black", addCoefasPercent = TRUE)
```
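One quick way to answer the question posed in the chunk above is to look at the correlation between mean_correct and each component individually; a sketch:

```{r}
#Correlation of mean_correct with each principal component
cor(D3, D1$mean_correct)
#If a component you planned to drop shows a sizable correlation here,
#dropping it would discard information about mean_correct
```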
## Now print out the eigenvectors (often called loadings) for the components you generated:
```{r}
pca$rotation
#Examine the eigenvectors; notice that they are a little difficult to interpret. It is much easier to make sense of them if we make them proportional within each component
loadings <- abs(pca$rotation) #abs() converts all the loadings to positive values
sweep(loadings, 2, colSums(loadings), "/") #sweep() divides each entry by its column sum, so the loadings within each component sum to 1. (There must be a way to do this with dplyr?)
#Now examine your components and try to come up with substantive descriptions of what some of them might represent.
#You can generate a biplot to help you, though these can be a bit confusing. They plot the transformed data by the first two components. Therefore, the axes represent the direction of maximum variance. Then mapped onto this point cloud are the original directions of the variables, depicted as red arrows. It is supposed to provide a visualization of which variables "go together". Variables that possibly represent the same underlying construct point in the same direction.
biplot(pca)
#Calculate values for each student that represent your composite variables and then create a new correlogram showing their relationship to mean_correct. (One possible approach is sketched after this chunk.)
```
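A sketch of one way to build those composite values: multiply the scaled data by the proportional loadings of the components you decided to keep (the choice of the first three components here is purely illustrative), then correlate the result with mean_correct:

```{r}
#Recompute and store the proportional loadings from above
prop.loadings <- sweep(loadings, 2, colSums(loadings), "/")
#Composite score for each student on the retained components (first three, as an illustration)
composites <- as.matrix(D2) %*% prop.loadings[, 1:3]
colnames(composites) <- paste0("composite", 1:3)
#Correlogram of the composites against mean_correct
COR3 <- cor(cbind(composites, mean_correct = D1$mean_correct))
corrplot(COR3, method="circle", tl.col="black", addCoef.col="black", addCoefasPercent = TRUE)
```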
## Part III
## Also in this repository is a data set and codebook from Rod Martin, Patricia Puhlik-Doris, Gwen Larsen, Jeanette Gray, and Kelly Weir at the University of Western Ontario about people's sense of humor. Can you perform a PCA on this data?
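A minimal sketch to get started, assuming the humor data is a CSV in this repository; the file name `humor_data.csv` and the column range kept below are assumptions, so check the codebook for the actual item columns:

```{r eval=FALSE}
#File name and column selection are assumptions; adjust them to match the repository and codebook
H1 <- read.csv("humor_data.csv")
#Keep only the numeric questionnaire items (column range assumed)
H2 <- H1[, 1:32]
#Run the PCA on centered and scaled data
humor.pca <- prcomp(H2, scale = TRUE)
summary(humor.pca)
plot(humor.pca, type = "lines")
humor.pca$rotation
```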