THE NON-SCHOLARLY WRITE-UP : LeNet architecture applied to the MNIST dataset: 99% accuracy.

Standard

For about a week now, I have been working on the ‘Digit Recognizer’ competition over at Kaggle. I started out with my favourite go-to algorithm, Random Forest, and eventually moved on to other implementations and variations, including

  • KNN
  • KNN with PCA
  • XGBoost
  • Deep learning with H2O
  • GBM with H2O
  • Ensembling

And then I plateaued at 97.8%.

A quick Google search along the lines of ‘improve score + Digit Recognizer + MNIST’ threw up a bunch of pages, all of which seemed to talk about neural networks. I’m like, huh? Isn’t that biology?

Sure is. Who’da thunk it!

Anyway, I spent considerable time poring over a few AMAZING, bookmark-able resources and implemented my first ConvNet (I feel so accomplished!).

The implementation in question is LeNet, one of the best-known convolutional network architectures, famously used to read zip codes, handwritten digits, etc.


The model consists of a convolutional layer followed by a pooling layer, another convolutional layer followed by a pooling layer, and then two fully connected layers, similar to a conventional multilayer perceptron.

Step 1: Load libraries

install.packages("drat")
require(drat)
drat::addRepo("dmlc")   # mxnet is distributed through the dmlc drat repository
install.packages("mxnet")
require(mxnet)

Step 2: Read the datasets

These are available from the Kaggle ‘Digit Recognizer’ competition page here.

Here, every image is represented as a single row of 784 pixel values, each in the range 0 to 255.

trainorig <-read.csv("C:/Users/Amita/Downloads/train.csv",header=T,sep=",")
testorig <-  read.csv("C:/Users/Amita/Downloads/test.csv",header=T,sep=",")
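As a quick sanity check on the row-per-image layout, a 784-value row can be reshaped into a 28x28 matrix. A fabricated gradient row stands in for trainorig[1, -1] here, so this sketch runs without the Kaggle files:

```r
# Reshape one 784-pixel 'row' into a 28x28 image matrix.
row <- seq(0, 255, length.out = 784)          # stand-in for a real pixel row
img <- matrix(as.numeric(row), nrow = 28, ncol = 28)
dim(img)     # 28 28
range(img)   # 0 255
# image(t(apply(img, 2, rev)), col = grey.colors(256))  # view the 'digit' upright
```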

Step 3: Split off a labelled hold-out set and convert the datasets into matrices

# The steps below assume an 80:20 labelled hold-out split of train.csv
# (Step 9 divides by 8,400 = 20% of the 42,000 training rows);
# 'test_org' keeps the label column for the accuracy check in Step 9.
set.seed(0)
idx      <- sample(nrow(trainorig), 0.8 * nrow(trainorig))
train    <- data.matrix(trainorig[idx, ])
test_org <- data.matrix(trainorig[-idx, ])
test     <- test_org

Step 4: Extract the labels

train.x<-train[,-1]
train.y<-train[,1] # labels
test<-test[,-1]

Step 5: Scale the data and transpose the matrices, since mxnet seems to prefer observations in columns instead of rows.

train.x<-t(train.x/255)
test<-t(test/255)

The transposed matrix contains data in the form n_pixels x n_examples.

Step 6: Convert the matrices into arrays for LeNet

train.array <- train.x
dim(train.array) <- c(28, 28, 1, ncol(train.x))
test.array <- test
dim(test.array) <- c(28, 28, 1, ncol(test))

Each input x is a 28x28x1 array representing one image: the first two numbers are the width and height in pixels, and the third is the number of channels (1 for grayscale images, 3 for RGB).
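The reshaping above can be checked on fake data; the names here (toy.x, toy.array) are stand-ins of mine, not part of the original script:

```r
# Five fake 'images', laid out pixels x examples as after the transpose in Step 5.
toy.x <- matrix(runif(784 * 5), nrow = 784, ncol = 5)
toy.array <- toy.x
dim(toy.array) <- c(28, 28, 1, ncol(toy.x))   # width, height, channels, examples
dim(toy.array)  # 28 28 1 5
```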

Step 7: Configure the structure of the network

# Convolutional NN
data <- mx.symbol.Variable('data')
devices <- mx.cpu()
# first conv
conv1 <- mx.symbol.Convolution(data=data, kernel=c(5,5), num_filter=20)
relu1 <- mx.symbol.Activation(data=conv1, act_type="relu")
pool1 <- mx.symbol.Pooling(data=relu1, pool_type="max",
                           kernel=c(2,2), stride=c(2,2))
# second conv
conv2 <- mx.symbol.Convolution(data=pool1, kernel=c(5,5), num_filter=50)
relu2 <- mx.symbol.Activation(data=conv2, act_type="relu")
pool2 <- mx.symbol.Pooling(data=relu2, pool_type="max",
                           kernel=c(2,2), stride=c(2,2))
# first fullc
flatten <- mx.symbol.Flatten(data=pool2)
fc1 <- mx.symbol.FullyConnected(data=flatten, num_hidden=500)
relu3 <- mx.symbol.Activation(data=fc1, act_type="relu")
# second fullc
fc2 <- mx.symbol.FullyConnected(data=relu3, num_hidden=10)  # 10 digit classes
# loss
lenet <- mx.symbol.SoftmaxOutput(data=fc2)
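To see where the layer sizes come from, you can track the feature-map dimensions by hand (5x5 valid convolutions, 2x2 max-pools with stride 2). This bookkeeping sketch is my own illustration, not part of the original script:

```r
conv_out <- function(size, kernel) size - kernel + 1  # stride 1, no padding
pool_out <- function(size) size %/% 2                 # 2x2 pool, stride 2
s <- 28
s <- conv_out(s, 5)   # conv1 -> 24x24 (x20 filters)
s <- pool_out(s)      # pool1 -> 12x12
s <- conv_out(s, 5)   # conv2 -> 8x8  (x50 filters)
s <- pool_out(s)      # pool2 -> 4x4
flat <- s * s * 50    # Flatten feeds 800 features into fc1
flat                  # 800
```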

Step 8: Train the model

mx.set.seed(0)

model <- mx.model.FeedForward.create(lenet, X=train.array, y=train.y,
                                     ctx=devices, num.round=20, array.batch.size=100,
                                     learning.rate=0.05, momentum=0.9, wd=0.00001,
                                     eval.metric=mx.metric.accuracy,
                                     epoch.end.callback=mx.callback.log.train.metric(100))

Step 9: Predict on the hold-out test dataset and calculate accuracy

preds <- predict(model, test.array)
pred.label <- max.col(t(preds)) - 1  # predicted digit = index of max probability, minus 1
# 'test_org' is the labelled hold-out split (8,400 rows = 20% of train.csv)
sum(diag(table(test_org[,1], pred.label))) / 8400
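The accuracy line, unpacked on a toy confusion table (fabricated labels, just to show the sum(diag(...)) idiom):

```r
y_true <- c(0, 1, 2, 1, 0, 2)
y_pred <- c(0, 1, 2, 0, 0, 2)
conf   <- table(y_true, y_pred)            # 3x3 confusion table
acc    <- sum(diag(conf)) / sum(conf)      # matches on the diagonal / all cases
acc    # 5/6
```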

Step 10: Predict on the final test dataset and submit to Kaggle

# predict on the kaggle dataset
testorig <- as.matrix(testorig)
testorig <- t(testorig/255)
testorig.array <- testorig
dim(testorig.array) <- c(28, 28, 1, ncol(testorig))

predtest <- predict(model, testorig.array)

predlabel <- max.col(t(predtest)) - 1
predictions <- data.frame(ImageId = 1:ncol(testorig), Label = predlabel)  # one row per test image
write.csv(predictions, "CNN.csv", row.names = FALSE)

and *ba-dum-tsss*!!! A 0.99086!

If anybody has any ideas on how to improve this score, please share! TIA!


The non-scholarly write-up: Logistic Regression with XGBoost.


This post is a long time coming.

UPDATE: I have inched my way to the top 13% of the Titanic competition (starting out at the ‘top’ 85%; who’d a thunk it. I love persevering. :D)

Anyway.

My last attempt involved XGBoost (Extreme Gradient Boosting), which did not beat my top score – it barely scraped past 77%. That being said, I thought it deserved a dedicated post, considering I have achieved great results with the algorithm on other Kaggle competitions.

In a nutshell, it

  • is a very, very fast version of GBM,
  • needs parameter tuning, which can get pretty frustrating (but hey, patience is a virtue!),
  • supports cross-validation,
  • is equipped to help find variable importance,
  • is robust to outliers and noisy data.

 

Cutting to the chase.

Step 1: Load libraries

require(xgboost)
require(Matrix)

Step 2: Read the datasets

dat<-read.csv("C:/Users/Amita/Downloads/train (1).csv",header=T,sep=",",
              na.strings = c(""))
test <- read.csv("C:/Users/Amita/Downloads/test (1).csv",header=T,sep=",",
              na.strings = c(""))

Step 3: Process the datasets

This is the same process as outlined in a previous blog post.

Step 4:  Extract the response variable column

label <- dat$Survived
dat <- dat[,-2] # remove the 'Survived' response column from the training dataset

Step 5:  Combine the training and test datasets

combi <- rbind(dat,test)

Step 6: Create a sparse matrix to ‘dummify’ the categorical variables, i.e. convert all categorical variables to binary

One thing to remember with XGBoost is that it ONLY works with numerical data types. So datatype conversion is necessary before you proceed with model building.

data_sparse <- sparse.model.matrix(~.-1, data = as.data.frame(combi))
cat("Data size: ", data_sparse@Dim[1], " x ", data_sparse@Dim[2], " \n", sep = "")
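For intuition, here is what the ‘dummification’ does on a tiny fabricated frame, using base R’s model.matrix (sparse.model.matrix from the Matrix package produces the same columns, just stored sparsely):

```r
toy <- data.frame(Sex = factor(c("male", "female", "male")),
                  Age = c(22, 38, 26))
mm  <- model.matrix(~ . - 1, data = toy)  # -1 drops the intercept column
colnames(mm)  # "Sexfemale" "Sexmale" "Age"
mm[1, ]       # first row: Sexfemale=0, Sexmale=1, Age=22
```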

If you’re familiar with the ‘caret’ package, it has a pretty cool dummyVars function that does exactly what we did above.

# dummify the data (dummyVars is from the caret package)
require(caret)
dummify <- dummyVars(" ~ .", data = combi)
finaldummy <- data.frame(predict(dummify, newdata = combi))

Here, dummyVars transforms all character and factor columns (the function never transforms numeric columns) and returns the entire data set.

 

Step 7: Divide the dummified data back into train and test

dtrain <- xgb.DMatrix(data = data_sparse[1:nrow(dat), ], label = label)
dtest <- xgb.DMatrix(data = data_sparse[(nrow(dat)+1):nrow(combi), ])

Step 8: Cross Validate

In order to evaluate overfitting and underfitting of the models, we compute the cross-validation error.

set.seed(12345678) # for reproducibility

cv_model <- xgb.cv(data = dtrain,
  nthread = 8,             # number of threads allocated to the execution of XGBoost
  nfold = 5,               # the original data is divided into 5 equal random samples
  nrounds = 1000000,       # (maximum) number of iterations
  max_depth = 6,           # maximum depth of a tree
  eta = 0.05,              # controls the learning rate. 0 < eta < 1
  subsample = 0.70,        # subsample ratio of the training instances
  colsample_bytree = 0.70, # subsample ratio of columns when constructing each tree
  booster = "gbtree",      # gbtree or gblinear
  eval_metric = "error",   # binary classification error rate
  maximize = FALSE,        # maximize=TRUE means the larger the evaluation score the better
  early_stopping_rounds = 25, # training with a validation set will stop if the
                              # performance keeps getting worse for 25 consecutive rounds
  objective = "reg:logistic", # logistic regression
  print_every_n = 10,      # output is printed every 10 iterations
  verbose = TRUE)          # print the output
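early_stopping_rounds is worth understanding: the sketch below (plain R, my own illustration of the idea rather than xgboost’s internals) stops once the evaluation error has failed to improve for ‘patience’ consecutive rounds, and reports the best round:

```r
early_stop <- function(errors, patience = 25) {
  best <- Inf; since_best <- 0
  for (i in seq_along(errors)) {
    if (errors[i] < best) { best <- errors[i]; since_best <- 0 }
    else since_best <- since_best + 1
    if (since_best >= patience) return(i - patience)  # index of the best round
  }
  length(errors)  # never triggered: return the last round
}
early_stop(c(0.30, 0.25, 0.24, 0.26, 0.27, 0.28), patience = 3)  # 3
```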

Everything you need to know about the xgb.cv parameters and beyond is answered here: https://github.com/dmlc/xgboost/blob/master/doc/parameter.md

Step 9: Build model

temp_model <- xgb.train(data = dtrain,
  nthread = 8,
  nrounds = cv_model$best_iteration, # reuse the best iteration found by xgb.cv
  max_depth = 6,
  eta = 0.05,
  subsample = 0.70,
  colsample_bytree = 0.70,
  booster = "gbtree",
  eval_metric = "error",
  maximize = FALSE,
  objective = "reg:logistic",
  print_every_n = 10,
  verbose = TRUE,
  watchlist = list(trainrep = dtrain))

Easy reference : https://rdrr.io/cran/xgboost/man/xgb.train.html

Step 10: Predict ‘Survived’ values.

prediction <- predict(temp_model,dtest)
prediction <- ifelse(prediction>0.5,1,0)

 

Step 11: Check and plot the variable importance

Certain predictors drag down the performance of the model even though it makes complete sense, gut-wise, to keep them. On a couple of occasions, variable importance has helped me decide the relevance of the predictors, which positively impacted the accuracy of my model.

importance <- xgb.importance(feature_names = data_sparse@Dimnames[[2]], 
              model = temp_model) #Grab all important features
xgb.plot.importance(importance) #Plot


 

For everything XGBoost, I frequented this page and this page. Pretty thorough resources, IMHO.

Annnd, that’s pretty much it!

Go get ’em!

Top 16%: How to plateau using the ‘Feature Engineering’ approach.


Ladies and Gents! I am now placed in the top 16% of the leaderboard rankings with a score of .80383.


 

I have also plateaued horribly. No matter what other features I try to ‘engineer’, my score just won’t budge. It gets worse, sure, but never better. Bummer.

Everything pretty much remains the same as the previous post in terms of data reading and cleaning. In this post, let’s look at what I did differently.

This attempt was a departure from applying the algorithms as-is and hoping for a better prediction (admit it, we’re all guilty). This time I incorporated the ‘human’ element – I even tried to recall scenes from the movie for that extra insight (still unfair how Rose hogged the entire wooden plank).

Some of the theories I considered:

  • Women and children were given priority and evacuated first.
  • Mothers would look out for their children.
  • First class passengers were given priority over those in 2nd or 3rd class.
  • Women and children were probably given priority over males in every class.
  • Families travelling together probably had a better chance of survival since they’d try to stick together and help each other out.
  • Older people would have trouble evacuating and hence, would have lower odds of survival.

 

Also, this time around, I played around with the ‘Name’ and ‘Cabin’ variables, and that made a huge difference!

So what you need to do to plateau with an 80.4% prediction is as follows:

Identify the unique titles and create a new variable unique:

# check for all the unique titles 

unique <- gsub(".*?,\\s(.*?)\\..*$","\\1",dat$Name)

dat$unique<- unique
dat$unique[dat$unique %in% c("Mlle","Mme")] <-"Mlle"
dat$unique[dat$unique %in% c('Capt', 'Don', 'Major', 'Sir')] <- 'Sir'
dat$unique[dat$unique %in% c('Dona', 'Lady', 'the Countess','Jonkheer')] <- 'Lady'

table(dat$unique) # check the distribution of different titles

# passenger’s title 
dat$unique <- factor(dat$unique)
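The title regex at work, on a couple of names in the dataset’s “Surname, Title. Given names” format:

```r
nm <- c("Braund, Mr. Owen Harris", "Cumings, Mrs. John Bradley")
# lazily skip to the first comma, then capture everything up to the first period
gsub(".*?,\\s(.*?)\\..*$", "\\1", nm)  # "Mr" "Mrs"
```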

Identify the children and create a new variable ischild:

dat$ischild <- factor(ifelse(dat$Age<=16,"Child","Adult"))

Identify the mothers and create a new variable isMother:

dat$isMother<- "Not Mother"
dat$isMother[dat$Sex=="female" & dat$Parch>0 & unique!="Miss"] <- "Mother"
dat$isMother<- factor(dat$isMother)

Uniquely identify the Cabins: This variable leads to somewhat of an overfit.

dat$Cabin <- substr(dat$Cabin,1,1)
dat$Cabin[dat$Cabin %in% c("F","G","T",NA)] <- "X"
dat$Cabin<- factor(dat$Cabin)

Compute the family size and create a new variable familysize :

dat$familysize <- dat$SibSp + dat$Parch + 1

Use the ‘familysize‘ variable and the surname of the passenger to designate the family size as “Small” or “Big” in the new variable unit :

pass_names <- dat$Name
extractsurname <- function(x){
  if(grepl(".*?,\\s.*?", x)){
    gsub("^(.*?),\\s.*?$", "\\1", x)
  } else {
    x  # no comma found: fall back to the full string
  }
}

surnames <- vapply(pass_names, FUN=extractsurname,FUN.VALUE = character(1),USE.NAMES = F)
fam<-paste(as.character(dat$familysize),surnames,sep=" ")


famsize <- function(x){
  # 'x' looks like "3 Smith"; compare the numeric prefix as a number,
  # not as a string (substr(x,1,2) > 2 compares characters and misfires)
  n <- as.numeric(gsub("^(\\d+)\\s.*$", "\\1", x))
  if(n > 2) "Big" else "Small"
}

unit <- vapply(fam, FUN=famsize,FUN.VALUE = character(1),USE.NAMES = F)
dat$unit <- unit
dat$unit <- factor(dat$unit)

 

Split the ‘dat’ dataset into train and test (60:40 split) and fit the random forest model.

n<- nrow(dat)
shuffled <- dat[sample(n),]

traindat <- shuffled[1:round(0.6*n),]
testdat<- shuffled[(round(0.6*n) + 1):n,]

dim(traindat)
dim(testdat)
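The shuffle-and-slice split, on a toy frame so the 60:40 row counts are easy to verify (set.seed and the toy names are additions of mine, for reproducibility):

```r
set.seed(50)
toydat   <- data.frame(x = 1:10)
n        <- nrow(toydat)
shuffled <- toydat[sample(n), , drop = FALSE]   # random row order
traintoy <- shuffled[1:round(0.6 * n), , drop = FALSE]
testtoy  <- shuffled[(round(0.6 * n) + 1):n, , drop = FALSE]
c(nrow(traintoy), nrow(testtoy))  # 6 4
```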

require(caret)
require(ranger)
model <- train(
 Survived ~.,
 tuneLength = 50,
 data = traindat, method ="ranger",
 trControl = trainControl(method = "cv", number = 5, verboseIter = TRUE)
)

pred <- predict(model,newdata=testdat[,-2])
conf<- table(testdat$Survived,pred)
accuracy<- sum(diag(conf))/sum(conf)
accuracy

Using the model to predict survival (minus Cabin) gives us 83.14% accuracy on our test data ‘testdat’ and 80.34% on Kaggle.

Using the model to predict survival (with Cabin) gives us 83.71% accuracy on our test data ‘testdat’, which drops to around 79% on Kaggle.

Although I still haven’t tinkered around with ‘Fare’, ‘Ticket’, and ‘Embarked’ (the urge to do so is strong), I think I’ll just leave them alone for the time being – but I will be revisiting for that elusive ‘eureka’ moment!

You can find the code here.

 

Learning from Disaster – The Random Forest Approach.


Kaggle update:

I’m up 1,311 spots from last week’s submission. Yay!


Having tried logistic regression the first time around, I moved on to decision trees and KNN. But unfortunately, those models performed horribly and had to be scrapped.

Random Forest seemed to be the buzzword around the Kaggle forums, so I obviously had to try it out next. I took a couple of days to read up on it and worked out a few examples on my own before taking another stab at the Titanic dataset.

The ‘caret’ package is a beauty. It seems to be the most widely used package for supervised learning, too. I cannot get over how simple and consistent it makes predictive modelling. So far I have been able to do everything from data splitting, to data standardization, to model building, to model tuning – all using one package. And I am still discovering all that it has to offer. Pretty amazing stuff.

I will give you a super quick walk-through of how I applied the random forest algorithm and then go enjoy whatever’s left of my Sunday.

 

Read In The Data:

dat<-read.csv("C:/Users/Amita/Downloads/train (1).csv",header=T,sep=",",
     na.strings = c(""))
test <- read.csv("C:/Users/Amita/Downloads/test (1).csv",header=T,sep=",",
     na.strings = c(""))

Check For Missing Values:

sapply(dat,function(x){sum(is.na(x))}) 

sapply(test,function(x){sum(is.na(x))})
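What those two lines report, on a tiny fabricated frame:

```r
toy <- data.frame(Age   = c(22, NA, 30),
                  Cabin = c(NA, NA, "C85"),
                  stringsAsFactors = FALSE)
# count the NAs in each column
sapply(toy, function(x) { sum(is.na(x)) })  # Age: 1, Cabin: 2
```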

(output: missing-value counts per column)

The variable ‘Cabin’ seems to have the most missing values and is quite beyond repair – so we’ll drop it. Also, I really don’t think ‘Name’ and ‘Ticket’ could possibly have any relation to the odds of surviving, so we’ll drop those as well. (So reckless! :D)

‘Age’ has quite a few missing values as well, but I have a hunch we’ll need that one, so we need to replace the missing values there.

 

dat[is.na(dat$Age),][6] <- mean(dat$Age, na.rm=T)   # column 6 is 'Age'
dat <- dat[,-c(4,9,11)]   # drop 'Name', 'Ticket', 'Cabin'

test[is.na(test$Age),][5] <- mean(test$Age, na.rm=T)  # column 5 is 'Age' in 'test'
test <- test[,-c(3,8,10)]  # drop the same three columns
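Mean imputation in isolation, on a fabricated age vector:

```r
age <- c(22, NA, 30, NA, 26)
age[is.na(age)] <- mean(age, na.rm = TRUE)  # mean of 22, 30, 26 is 26
age  # 22 26 30 26 26
```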

 

Next, we’ll split the complete training dataset ‘dat’ into two sub-datasets which we shall use for testing our model. Let’s go for a 60:40 split.

set.seed(50)
n<- nrow(dat)
shuffled <- dat[sample(n),]
traindat <- shuffled[1:round(0.6*n),]
testdat<- shuffled[(round(0.6*n) + 1):n,]

 

For this tutorial, we need to install the ‘caret’ package. I am not going to use the ‘randomForest’ package, but instead the ‘ranger’ package, which is supposed to provide a much faster implementation of the algorithm.

install.packages("caret")
install.packages("ranger")
library(caret)
library(ranger)

A little more cleaning prompted by errors thrown along the way. Gotta remove all NAs.

sum(is.na(traindat))
sum(is.na(testdat))

traindat[is.na(traindat$Embarked),][["Embarked"]]<-"C"
testdat[is.na(testdat$Embarked),][["Embarked"]]<-"C"


Convert the ‘Survived’ variable to a factor so that caret builds a classification instead of a regression model.

testdat$Survived<-factor(testdat$Survived)
traindat$Survived<-factor(traindat$Survived)

 

Build The Model:

model <- train(
 Survived ~.,
 tuneLength = 50,
 data = traindat, method ="ranger",
 trControl = trainControl(method = "cv", number = 5, verboseIter = TRUE)
)

As you can see, we are doing a bunch of things in one statement.

The model being trained uses ‘Survived’ as the response variable and all the others as predictors, with ‘traindat’ as the input dataset. The tuneLength argument to caret::train() tells train how many candidate models to explore along its default tuning grid – a higher value usually means more accurate results, since more models are evaluated, but it also takes longer. caret supports many types of cross-validation, and you can specify the type and the number of folds with the trainControl() function, which you pass to the trControl argument of train(). In our statement, we are specifying 5-fold cross-validation. verboseIter = TRUE just shows the progress of the algorithm.

(table: cross-validated accuracy for each mtry value)

The table shows different values of mtry along with their corresponding average accuracies. caret automatically picks the value of the hyperparameter ‘mtry’ that was the most accurate under cross-validation (mtry = 5 in our case).

We can also plot the model to visually inspect the accuracies of the various mtry values. mtry = 5 has the maximum average accuracy of 81.6%.

 

(plot: accuracy against mtry)

Make Predictions on ‘testdat’:

Let’s apply the model to predict the survival on our test dataset, ‘testdat’, which is 40% of our whole training dataset.

pred <- predict(model,newdata=testdat[,-2])

#create confusion matrix
conf<- table(testdat$Survived,pred)

#compute accuracy
accuracy<- sum(diag(conf))/sum(conf)
accuracy

The accuracy is returned at 80.8%. Pretty close to what we saw above.

 

And finally,

Make Predictions on the Kaggle test dataset, ‘test’.

test$Survived <- predict(model, newdata = test)
submit <- data.frame(PassengerId = test$PassengerId, Survived = test$Survived)
write.csv(submit, file = "submissionrf.csv", row.names = FALSE)

 

 

Get Result:


77.5% as opposed to last week’s score of 75.86%.

Not bad.

 

We’ll make it better next week.

Meanwhile, please feel free to leave any pointers for me in the comments section below. I am always game for guidance and feedback!

 

P.S. I have been really bad about uploading code to GitHub – but I’ll get around to it in a day or two and put up a link here – I promise!

 

Binomial Logistic Regression.


I’m officially a Kaggler.

Cut to the good ol’ Titanic challenge. Ol’ is right – it’s been running since 2012 and ends in 3 months! I showed up late to the party. Oh well, I guess it’s full steam ahead from now on.

The competition, ‘Machine Learning from Disaster’, asks you to apply machine learning to analyse and predict which passengers survived the Titanic tragedy. It is billed as a knowledge competition.

Since I am still inching my way up the learning curve, I tried to see what I could achieve with my current tool set. For my very quick first attempt, it seemed like a no-brainer to apply out-of-the-box logistic regression. For those in the competition, this approach got me around 75-something% and placed me at 4,372 of 5,172 entries. I have 3 months to better this score. And better it, I shall!

So essentially how this works is that you download the data from Kaggle. 90% of it (889 rows) is flagged as training data and the rest is test data (418 rows). You need to build your model, predict survival on the test set and pass the data back to Kaggle, which computes a score for you and places you accordingly on the ‘Leaderboard’.

The data:

Since we’re working with real-world data, we’ll need to take into account the NAs, improper formatting, missing values, et al.

After reading in the data, I ran a check to see how many entries had missing values. The simple sapply() sufficed in this case.

dat<-read.csv("C:/Users/Amita/Downloads/train (1).csv",header=T,sep=","
      ,na.strings = c(""))
sapply(dat,function(x){sum(is.na(x))})

The column ‘Cabin’ seems to have the most missing values – like, a LOT – so I ditched it. ‘Age’ had quite a few missing values as well, but it seems like a relevant column.

I went ahead and dealt with the missing values by replacing them with the mean of the present values in that column. Easy peasy.

dat[is.na(dat$Age),][6] <- mean(dat$Age, na.rm=T)  # column 6 is 'Age'
dat <- dat[,-11]  # drop 'Cabin'

 

Next, I divided the training dataset into two – ‘traindat’ and ‘testdat’. The idea was to train the prospective model on the ‘traindat’ dataset and then predict using the rows in ‘testdat’. Computing the RMSE would then give us an idea about the performance of the model.

set.seed(30)
indices<- rnorm(nrow(dat))>0
traindat<- dat[indices,]
testdat<-dat[!indices,]
dim(traindat)
dim(testdat)
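The rnorm(n) > 0 trick sends each row to ‘traindat’ with probability one half, so the split ratio is only approximately 50:50. A quick check on the idea (larger n than the Titanic data, purely for illustration):

```r
set.seed(30)
indices <- rnorm(1000) > 0   # TRUE with probability 1/2 for each row
mean(indices)                # close to 0.5, but not exactly
```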

 

Structure-wise, except for a couple of columns that had to be converted into factors, the datatypes were on point.

testdat$Pclass<- factor(testdat$Pclass)
testdat$SibSp <- factor(testdat$SibSp)
testdat$Parch <- factor(testdat$Parch)
testdat$Survived<-factor(testdat$Survived)
traindat$Pclass<- factor(traindat$Pclass)
traindat$SibSp <- factor(traindat$SibSp)
traindat$Parch <- factor(traindat$Parch)
traindat$Survived<-factor(traindat$Survived)

 

The model:

Since the response variable is a categorical variable with only two outcomes, and the predictors are both continuous and categorical, this makes it a candidate for binomial logistic regression.

mod <- glm(Survived ~ Pclass + Sex + Age + SibSp+ Parch + Embarked + Fare ,
       family=binomial(link='logit'),data=traindat)

require(car)
Anova(mod)
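As a self-contained illustration of the same call shape (fabricated data; the real model uses the Titanic columns), glm with family = binomial fits log odds, and a predictor that separates the classes should get a positive coefficient:

```r
set.seed(1)
toy <- data.frame(y = factor(rep(c(0, 1), each = 50)),
                  x = c(rnorm(50, mean = 0), rnorm(50, mean = 2)))
fit <- glm(y ~ x, family = binomial(link = "logit"), data = toy)
coef(fit)[["x"]] > 0  # larger x raises the log odds of y = 1
```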

(Anova output: significance tests for each predictor)

The result shows significance for only ‘Pclass’, ‘Sex’, ‘Age’, and ‘SibSp’, so we’ll build a second model with just these variables and use that for further analysis.

mod2 <- glm(Survived ~ Pclass + Sex + Age + SibSp ,
        family=binomial(link='logit'),data=traindat)
Anova(mod2)

Let’s visualize the relationships between the response and predictor variables.

(effect plots: log odds of survival against each predictor)

  • The log odds of survival (since we used link = ‘logit’) seem to decline as the passenger’s class decreases.
  • Women have higher log odds of survival than men.
  • The higher the age, the lower the log odds of survival.
  • The number of siblings/spouses aboard* also affects the log odds of survival. The log odds for counts of 5 and 8 could go either way, as indicated by the wide CIs. (*needs to be explored more)

 

Model Performance:

We will test the performance of mod2 in predicting ‘Survived’ on a new set of data.

pred <- predict(mod2,newdata=testdat[,-2],type="response")
pred <- ifelse(pred > 0.5,1,0)
testdat$Survived <- as.numeric(levels(testdat$Survived))[testdat$Survived]
rmse <- sqrt((1/nrow(testdat)) * sum( (testdat$Survived - pred) ^ 2))
rmse  #0.4527195

error <- mean(pred != testdat$Survived)
print(paste('Accuracy',1-error)) #81% accuracy
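The thresholding and scoring above, isolated on fabricated numbers:

```r
probs <- c(0.9, 0.2, 0.7, 0.4)
truth <- c(1,   0,   0,   0)
pred  <- ifelse(probs > 0.5, 1, 0)     # 1 0 1 0
rmse  <- sqrt(mean((truth - pred)^2))  # sqrt(1/4) = 0.5
acc   <- mean(pred == truth)           # 0.75
```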

81.171% of passengers have been correctly classified.

But when I used the same model to run predictions on Kaggle’s test dataset, the uploaded results fetched me a 75.86%. I’m guessing the reason could be the arbitrary split ratio between ‘traindat’ and ‘testdat’. Maybe next time I’ll employ some sort of bootstrapping.

Well, this is pretty much it for now. I will attempt to better my score in the upcoming weeks (time permitting), and in the event I am successful, I shall add subsequent blog posts and share my learnings (is that even a word?!).

One thing’s for sure though –  This road is loooooong, is long , is long . . . . .

😀

Later!