Welcome to the R code for the Session 5 classroom examples. Please try these at home. Use the blog comments section for any queries or comments.
There are 3 basic modules of direct managerial relevance in session 5 -
(a) Segmentation: We use cluster analysis tools in R (Hierarchical, k-means and model-based algorithms)
(b) Decision Trees: Using some sophisticated R algorithms (recursive partitioning, conditional partitioning, random forests)
(c) Targeting: Using both parametric (Multinomial Logit) and non-parametric (Machine learning based) algorithms.
Now it may well be true that one session may not do justice to this range of topics. So, even if it spills over into other sessions, no problemo. But I hope and intend to cover this well.
This blog-post is for Segmentation via cluster analysis. The other two will be discussed in a separate blog post for clarity.
There are many approaches to cluster analysis and R handles a dizzying variety of them. We'll focus on 3 broad approaches - Agglomerative Hierarchical clustering (under which we will do basic hierarchical clustering with dendrograms), Partitioning (here, we do K-means) and Model-based clustering. Each has its pros and cons. Model-based is probably the best of the three, and highly recommended.
1. Cluster Analysis Data preparation
First read in the data. USArrests is pre-loaded, so no sweat. I use the USArrests dataset throughout for cluster analysis.
# first read in data
mydata = USArrests
Data preparation is required to remove variable scaling effects. To see this, consider a simple example. If you measure weight in Kgs and I do so in Grams - all other variables being the same - we'll get two very different clustering solutions from what is otherwise the same dataset. To get rid of this problem, just copy-paste the following code.
# Prepare Data #
mydata <- na.omit(mydata) # listwise deletion of missing values
mydata.orig = mydata # save a copy of the original data
mydata <- scale(mydata) # standardize variables
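To see the units problem concretely, here's a small sketch (hypothetical toy data, not from the session): the same weights recorded in kilograms vs grams give very different raw distance matrices, but identical distances once `scale()` is applied.

```r
# toy example: three people's weights, in two different units
wt_kg <- c(60, 61, 90)
wt_g  <- wt_kg * 1000            # same weights, recorded in grams
x     <- c(1, 2, 3)              # another variable on a small scale

dist(cbind(wt_kg, x))            # raw distances: weight dominates somewhat
dist(cbind(wt_g, x))             # raw distances: weight dominates completely

# after scale(), the choice of unit no longer matters
d1 <- dist(scale(cbind(wt_kg, x)))
d2 <- dist(scale(cbind(wt_g, x)))
all.equal(as.vector(d1), as.vector(d2))  # TRUE
```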
2. Now we first do agglomerative Hierarchical clustering, plot dendrograms, slice them at different heights, and see what happens.
# Ward Hierarchical Clustering
d <- dist(mydata, method = "euclidean") # distance matrix
fit <- hclust(d, method = "ward.D") # "ward.D" is the current name for the old "ward" option
plot(fit) # display dendrogram
Eyeball the dendrogram. Imagine horizontally slicing through the dendrogram's longest vertical lines, each of which represents a cluster. Should you cut it at 2 clusters or at 4? How do you know? Sometimes eyeballing is enough to give a clear idea, sometimes not. Suppose you decide 2 is better. Then set the optimal no. of clusters 'k1' to 2.
k1 = 2 # eyeball the no. of clusters
# cut tree into k1 clusters
groups <- cutree(fit, k=k1)
# draw dendrogram with red borders around the k1 clusters
rect.hclust(fit, k=k1, border="red")
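Once `cutree` has run, `table()` gives a quick look at the membership. A self-contained sketch (recomputing the tree, with `"ward.D"` as the current name for the old `"ward"` option):

```r
# cluster sizes and a peek at which states went where
groups <- cutree(hclust(dist(scale(USArrests)), method = "ward.D"), k = 2)
table(groups)                      # how many states per cluster
rownames(USArrests)[groups == 1]   # states in cluster 1
```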
3. Coming to the second approach, 'partitioning', we use the popular K-means method.
Again the question arises: how do we know the optimal no. of clusters? Eyeballing the dendrogram might sometimes help. But at other times, what should you do? MEXL (and most commercial software too) requires you to magically come up with the correct number as input to K-means. R does one better and shows you a scree plot of sorts - how the within-segment variance (a proxy for clustering solution quality) varies with the no. of clusters. So with R, you can actually take an informed call.
# Determine number of clusters #
wss <- (nrow(mydata)-1)*sum(apply(mydata, 2, var))
for (i in 2:15) wss[i] <- sum(kmeans(mydata, centers=i)$withinss)
plot(1:15, wss, type="b",
     xlab="Number of Clusters",
     ylab="Within groups sum of squares")
# Look for an "elbow" in the scree plot #
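One caveat on the loop above: `kmeans` uses random starting points, so the scree plot can wiggle a little from run to run. A sketch of a more stable, reproducible version (using `set.seed` and the `nstart` argument, both base R):

```r
mydata <- scale(USArrests)   # standardized data, as in the prep step

set.seed(42)                 # fix the seed so the plot is reproducible
wss <- (nrow(mydata) - 1) * sum(apply(mydata, 2, var))
for (i in 2:15)
  wss[i] <- sum(kmeans(mydata, centers = i, nstart = 25)$withinss)

plot(1:15, wss, type = "b",
     xlab = "Number of Clusters",
     ylab = "Within groups sum of squares")
```

With `nstart = 25`, each solution is the best of 25 random starts, which smooths out most of the run-to-run noise.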
# Use optimal no. of clusters in k-means #
k1 = 2
# K-Means Cluster Analysis
fit <- kmeans(mydata, k1) # k1 cluster solution
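A few quick diagnostics on the fitted object are often worth a look (a sketch; the field names below are from base R's `kmeans` return value, not from the session code):

```r
mydata <- scale(USArrests)
set.seed(42)
fit <- kmeans(mydata, 2)

fit$size                     # observations per cluster
fit$centers                  # centroids, in standardized units
fit$betweenss / fit$totss    # share of total variance separated by the clustering
```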
To understand a clustering solution, we need to go beyond merely IDing which individual unit goes to which cluster. We have to characterize each cluster, interpret what it is that's common among its members, and give each cluster a name and an identity, if possible. Ideally, after this we should be able to think in terms of clusters (or segments) rather than individuals for downstream analysis.
# get cluster means
aggregate(mydata.orig, by=list(fit$cluster), FUN=mean)
# append cluster assignment
mydata1 <- data.frame(mydata.orig, fit$cluster)
OK, that is fine. But can I actually, visually, *see* what the clustering solution looks like? Sure. In 2 dimensions, the easiest way is to plot the clusters on the 2 biggest principal components. Before copy-pasting the following code, ensure the 'cluster' package is installed.
# Cluster Plot against 1st 2 principal components
# vary parameters for most readable graph
library(cluster)
clusplot(mydata, fit$cluster, color=TRUE, shade=TRUE, labels=2, lines=0)
4. Finally, the last (and best) approach - Model based clustering.
'Best' because it is the most general approach (it nests the others as special cases), is the most robust to distributional and linkage assumptions and because it penalizes for surplus complexity (resolves the fit-complexity tradeoff in an objective way). My thumb-rule is: When in doubt, use model based clustering. And yes, mclust is available *only* on R to my knowledge.
Install the 'mclust' package for this first. Then run the following code.
# Model Based Clustering
library(mclust)
fit <- Mclust(mydata)
fit # view solution summary
fit$BIC # lookup all the options attempted
classif = fit$classification # classification vector
mydata1 = cbind(mydata.orig, classif) # append to dataset
mydata1[1:10, ] # view top 10 rows
# Use only if you want to save the output
write.table(mydata1, file.choose()) # save output
Visualize the solution. See how exactly it differs from that for the other approaches.
fit1 = cbind(classif)
rownames(fit1) = rownames(mydata)
library(cluster)
clusplot(mydata, fit1, color=TRUE, shade=TRUE, labels=2, lines=0)
To help characterize the clusters, examine the cluster means (sometimes also called 'centroids') for each basis variable.
# get cluster means
cmeans = aggregate(mydata.orig, by=list(classif), FUN=mean); cmeans
Now we can do the same copy-paste for any other datasets that may show up in classwork or homework. I'll close the segmentation module here. R tools for the Targeting module are discussed in the next blog post. For any queries or comments, please use the comments box below to reach me fastest.
Sudhir
Dear Prof
In H Clust you have mentioned cutting horizontally along the longest lines. What do the lines represent?
-A
Hi A,
The lines represent some function of the 'between sum of squares' across clusters. The longer the lines, the more separate or distant the clusters that branch out.
Sudhir
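To make the reply above concrete: `hclust` stores each merge height in `fit$height`, and big jumps between successive heights are what show up as long vertical lines in the dendrogram. `cutree` can also cut by height (`h=`) instead of by count (`k=`). A small sketch:

```r
d   <- dist(scale(USArrests))
fit <- hclust(d, method = "ward.D")   # current name for the old "ward" option
tail(fit$height, 3)                   # heights of the last few merges

# cutting between the two tallest merges yields exactly 2 clusters
h.cut  <- mean(tail(fit$height, 2))
groups <- cutree(fit, h = h.cut)
length(unique(groups))                # 2
```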