finish! #91
base: master

@@ -8,23 +8,20 @@ For this assignment we will be using data from the Assistments Intelligent Tutor

# Install & call libraries
```{r}
# install.packages(c("party", "rpart"))  # run once if the packages are not yet installed
library(rpart)
library(party)
```

## Part I
```{r}
D1 <- read.csv("intelligent_tutor.csv", sep = ",", header = TRUE)
```

## Classification Tree
First we will build a classification tree to predict which students ask a teacher for help, which start a new session, and which give up, based on whether or not the student completed a session (D1$complete) and whether or not they asked for hints (D1$hint.y).
```{r}
c.tree <- rpart(action ~ hint.y + complete, method = "class", data = D1) # Note the standard R notation for a formula: outcome ~ predictors

# Look at the error of this tree
printcp(c.tree)
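# (Added sketch, not part of the original assignment:) plotcp() plots the
# cross-validated error from the cptable reported by printcp() against the
# complexity parameter, which can help in choosing where to prune the tree.
plotcp(c.tree)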
@@ -40,42 +37,55 @@ We want to see if we can build a decision tree to help teachers decide which stu

# Visualize our outcome variable "score"
```{r}
hist(D1$score)
```

# Create a categorical outcome variable based on student score to advise the teacher using an "ifelse" statement
```{r}
D1$advice <- ifelse(D1$score <= 0.4, "intervene", ifelse(D1$score > 0.4 & D1$score <= 0.8, "monitor", "no action"))
```
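
As a quick sanity check (a small addition, not part of the original assignment), `table()` shows how many students fall into each advice category before we model it:

```{r}
# Count how many students land in each advice category
table(D1$advice)
```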

# Build a decision tree that predicts "advice" based on how many problems students have answered before, the percentage of those problems they got correct and how many hints they required
```{r}
score_ctree <- ctree(factor(advice) ~ prior_prob_count + prior_percent_correct + hints, D1)
# See ?ctree for documentation of the conditional inference tree algorithm
```
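
Before plotting, printing the fitted object gives a text summary of the tree (an added sketch, not part of the original assignment):

```{r}
# Text summary of the conditional inference tree: split variables,
# split criteria, and the response distribution in each terminal node
print(score_ctree)
```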

# Plot tree
```{r}
plot(score_ctree)
```

Please interpret the tree. Which two behaviors do you think the teacher should most closely pay attention to?

Interpreting the plot: students who asked for hints fewer than 12 times and answered fewer than 63% of their prior questions correctly are very likely to need intervention (node 7). Beyond that, students who did not ask for any hints but answered fewer than 85 prior questions correctly are also quite likely to need intervention (node 3). Therefore, I think "asking for fewer than 12 hints" and "having answered fewer than 85 questions correctly" are the two most important behaviors that teachers should pay close attention to.
# Test Tree
Upload the data "intelligent_tutor_new.csv". This is a data set of a different sample of students doing the same problems in the same system. We can use the tree we built from the previous data set to try to predict the "advice" we should give the teacher about these new students.

```{r}
# Upload new data
D2 <- read.csv("intelligent_tutor_new.csv", header = TRUE)

# Generate predicted advice using the predict() command for new students, based on the tree generated from the old students
D2$prediction <- predict(score_ctree, D2)
```
## Part III
Compare the predicted advice with the actual advice that these students received. What is the difference between the observed and predicted results?

```{r}
# All the students in the new data set achieved a score of 1.0, so the
# correct advice is always "no action"; the accuracy of our model's
# prediction is therefore:
mean(ifelse(D2$prediction == "no action", 1, 0))
```

> Reviewer comment: Another way to calculate the accuracy of the model's prediction is to create a table or a confusion matrix.
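
A confusion matrix gives a fuller picture than a single accuracy number. A minimal sketch (assumptions: D2 contains the same `score` column as D1, and the advice rule from Part II is reused):

```{r}
# Derive the "actual" advice for the new students using the same rule as before
D2$advice <- ifelse(D2$score <= 0.4, "intervene",
                    ifelse(D2$score <= 0.8, "monitor", "no action"))

# Cross-tabulate predicted vs. actual advice
table(Predicted = D2$prediction, Actual = D2$advice)
```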
We can see that, based on the model generated from the old data, the accuracy of the predictions on the new data is only 58%. I don't think this is a very accurate prediction.

> Reviewer comment: Try to be more specific, as in, state whether the model is generalizable or does it overfit.

> Reviewer comment: Overall, great job. Keep up the good work!
### To Submit Your Assignment

Please submit your assignment by first "knitting" your RMarkdown document into an html file, and then committing, pushing, and opening a pull request with both the RMarkdown file and the html file.

> Reviewer comment: Great work!