# Using RMixtComp with mixed and missing data

#### 2021-03-29

Unsupervised classification is illustrated on the titanic dataset, a data.frame with 1309 observations and 8 variables describing the passengers of the Titanic. Each passenger is described by two real variables: age in years (age) and ticket price in pounds (fare); two count variables: number of siblings/spouses aboard (sibsp) and number of parents/children aboard (parch); and four categorical variables: sex, ticket class (pclass), port of embarkation (embarked) and a binary variable indicating whether the passenger survived (survived). Furthermore, the dataset contains missing values in three variables: age, fare and embarked.

library(RMixtComp)
data(titanic)
print(titanic[c(1, 16, 38, 169, 285, 1226),])
##      pclass survived    sex  age sibsp parch     fare embarked
## 1       1st        1 female 29.0     0     0 211.3375        S
## 16      1st        0   male   NA     0     0  25.9250        S
## 38      1st        1   male   NA     0     0  26.5500        S
## 169     1st        1 female 38.0     0     0  80.0000     <NA>
## 285     1st        1 female 62.0     0     0  80.0000     <NA>
## 1226    3rd        0   male 60.5     0     0       NA        S

## Step 1: Data Preparation

First, the dataset must be converted to the MixtComp format. Categorical variables must be recoded as integers from 1 to the number of categories (e.g. 3 for embarked). This can be done with the refactorCategorical function, which takes as arguments the vector containing the data, the old labels and the new labels. Totally missing values must be indicated with a ?.

titanicMC <- titanic
titanicMC$sex <- refactorCategorical(titanic$sex, c("male", "female"), c(1, 2))
titanicMC$pclass <- refactorCategorical(titanic$pclass, c("1st", "2nd", "3rd"), c(1, 2, 3))
titanicMC$embarked <- refactorCategorical(titanic$embarked, c("C", "Q", "S"), c(1, 2, 3))
titanicMC$survived <- refactorCategorical(titanic$survived, c(0, 1), c(1, 2))
titanicMC[is.na(titanicMC)] <- "?"
head(titanicMC)
##   pclass survived sex    age sibsp parch     fare embarked
## 1      1        2   2     29     0     0 211.3375        3
## 2      1        2   1 0.9167     1     2   151.55        3
## 3      1        1   2      2     1     2   151.55        3
## 4      1        1   1     30     1     2   151.55        3
## 5      1        1   2     25     1     2   151.55        3
## 6      1        2   1     48     0     0    26.55        3

The dataset is split into two subsets to illustrate learning and prediction.

set.seed(42) # fix the seed so the split is reproducible
indTrain <- sample(nrow(titanicMC), floor(0.8 * nrow(titanicMC)))
titanicMCTrain <- titanicMC[indTrain, ]
titanicMCTest <- titanicMC[-indTrain, ]

Then, since all variables are stored as character in the data.frame, a model object indicating which model to use for each variable must be created. In this example, a Gaussian model is used for the age and fare variables, a multinomial model for sex, pclass, embarked and survived, and a Poisson model for sibsp and parch.

model <- list(fare = "Gaussian", age = "Gaussian", pclass = "Multinomial",
              survived = "Multinomial", sex = "Multinomial", embarked = "Multinomial",
              sibsp = "Poisson", parch = "Poisson")

## Step 2: Learning

We run the clustering analysis for 1 to 20 clusters, with 3 runs for each number of clusters. These runs can be parallelized using the nCore parameter.

resTitanic <- mixtCompLearn(titanicMCTrain, model, nClass = 1:20, nRun = 3, nCore = 1)

## Step 3: Interpretation and Visualization

The summary and plot functions give an overview of the results for the best number of classes according to the chosen criterion (BIC or ICL). If this number is not the one desired by the user, it can be changed via the nClass parameter.
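For instance, relying on the nClass parameter mentioned above, the solution for a different number of classes can be inspected directly; this is a sketch, and the choice of 3 classes is purely illustrative:

```r
# Overview of the 3-class solution instead of the criterion-optimal one
summary(resTitanic, nClass = 3)
plot(resTitanic, nClass = 3)
```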

The summary displays the chosen number of clusters along with outputs such as the discriminative power, which indicates the variables contributing most to class separation, and the parameters of the three most discriminant variables.

summary(resTitanic)
## ############### MixtCompLearn Run ###############
## nClass: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20
## Criterion used: BIC
##             1         2         3         4         5         6         7
## BIC -14402.77 -12807.94 -12081.82 -11861.55 -11672.19 -11619.08 -11449.07
## ICL -14402.77 -12841.96 -12096.52 -11921.96 -11724.80 -11686.01 -11537.20
##             8         9        10        11        12        13        14
## BIC -11520.15 -11489.33 -11469.35 -11472.09 -11375.19 -11425.32 -11460.93
## ICL -11579.53 -11550.58 -11538.77 -11551.15 -11491.42 -11568.60 -11567.78
##            15        16        17        18        19        20
## BIC -11501.76 -11584.42 -11442.32 -11598.08 -11536.71 -11559.30
## ICL -11615.10 -11710.87 -11606.24 -11756.62 -11652.34 -11680.97
## Best model: 12 clusters
## ########### MixtComp Run ###########
## Number of individuals: 1047
## Number of variables: 8
## Number of clusters: 12
## Mode: learn
## Time: 0.436 s
## SEM burn-in iterations done: 50/50
## SEM run iterations done: 50/50
## Observed log-likelihood: -10836.28
## BIC: -11375.19
## ICL: -11491.42
## Discriminative power:
##     fare   pclass    parch    sibsp embarked      sex      age survived
##    0.595    0.423    0.185    0.167    0.140    0.138    0.136    0.130
## Proportions of the mixture:
## 0.077 0.075 0.048 0.089 0.048 0.054 0.054 0.047 0.161 0.187 0.034 0.125
## Parameters of the most discriminant variables:
## - fare: Gaussian
##          mean      sd
## k: 1   28.062  10.350
## k: 2   84.347  33.233
## k: 3   27.792   1.793
## k: 4   12.366   1.262
## k: 5   15.222   8.547
## k: 6   39.076  15.930
## k: 7   41.565  23.455
## k: 8  209.304 110.168
## k: 9    7.833   0.114
## k: 10   7.808   0.731
## k: 11  68.820  20.934
## k: 12  15.540   4.704
## - pclass: Multinomial
##       modality 1 modality 2 modality 3
## k: 1       0.000      1.000      0.000
## k: 2       1.000      0.000      0.000
## k: 3       1.000      0.000      0.000
## k: 4       0.000      1.000      0.000
## k: 5       0.000      0.581      0.419
## k: 6       0.000      0.000      1.000
## k: 7       0.565      0.267      0.167
## k: 8       1.000      0.000      0.000
## k: 9       0.000      0.000      1.000
## k: 10      0.000      0.000      1.000
## k: 11      1.000      0.000      0.000
## k: 12      0.000      0.000      1.000
## - parch: Poisson
##       lambda
## k: 1   0.951
## k: 2   0.282
## k: 3   0.000
## k: 4   0.000
## k: 5   0.000
## k: 6   2.569
## k: 7   0.000
## k: 8   1.109
## k: 9   0.000
## k: 10  0.000
## k: 11  0.220
## k: 12  0.744
## ####################################

The plot function displays the values of the criteria, the discriminative power of the variables and the parameters of the three most discriminative variables. More variables can be displayed using the nVarMaxToPlot parameter.

plot(resTitanic)
(Plots are displayed for the criteria ($criteria), the discriminative power of the variables ($discrimPowerVar), the mixture proportions ($proportion) and the parameters of the fare ($fare), pclass ($pclass) and parch ($parch) variables.)
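As a sketch of the nVarMaxToPlot parameter mentioned above, the parameters of more variables can be displayed at once:

```r
# Plot the parameters of the five most discriminant variables
plot(resTitanic, nVarMaxToPlot = 5)
```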
The most discriminant variables for clustering are fare and pclass. The similarity between variables can be shown with the following code:

heatmapVar(resTitanic)

round(computeSimilarityVar(resTitanic), 2)
##          fare  age pclass survived  sex embarked sibsp parch
## fare     1.00 0.37   0.41     0.38 0.37     0.39  0.37  0.37
## age      0.37 1.00   0.59     0.75 0.74     0.72  0.70  0.72
## pclass   0.41 0.59   1.00     0.60 0.57     0.59  0.54  0.54
## survived 0.38 0.75   0.60     1.00 0.82     0.75  0.71  0.72
## sex      0.37 0.74   0.57     0.82 1.00     0.74  0.71  0.72
## embarked 0.39 0.72   0.59     0.75 0.74     1.00  0.69  0.69
## sibsp    0.37 0.70   0.54     0.71 0.71     0.69  1.00  0.71
## parch    0.37 0.72   0.54     0.72 0.72     0.69  0.71  1.00

The greatest similarity is between survived and sex; this relationship is well known in this dataset, with a much higher proportion of women surviving than men. In contrast, there is little similarity between fare and the other variables.

Getters are available to easily access the results: getBIC, getICL, getCompletedData, getParam, getProportion, getTik, getPartition, … All these functions use the model maximizing the chosen criterion. If the results for another number of classes are desired, the extractMixtCompObject function can be used. For example:

getProportion(resTitanic)
##       k: 1       k: 2       k: 3       k: 4       k: 5       k: 6       k: 7
## 0.07729008 0.07538168 0.04770992 0.08874046 0.04770992 0.05438931 0.05438931
##       k: 8       k: 9      k: 10      k: 11      k: 12
## 0.04675573 0.16125954 0.18702290 0.03435115 0.12500000
resK2 <- extractMixtCompObject(resTitanic, 2)
getProportion(resK2)
##      k: 1      k: 2
## 0.4746896 0.5253104
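Since the dataset contains missing values, the completed data, in which missing entries have been imputed by the model, can be retrieved with the getCompletedData getter listed above:

```r
# Dataset with the missing values (age, fare, embarked) imputed by the model
completed <- getCompletedData(resTitanic)
head(completed)
```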

## Step 4: Prediction

Once a model has been learnt, it can be used to predict the clusters of new individuals; here the 5-class model is selected via the nClass parameter.

resPred <- mixtCompPredict(titanicMCTest, resLearn = resTitanic, nClass = 5, nRun = 3, nCore = 1)

The probabilities of belonging to the different classes and the associated partition are given by:

tik <- getTik(resPred)
head(tik)
##      [,1]      [,2] [,3]         [,4] [,5]
## [1,] -Inf  0.000000 -Inf         -Inf -Inf
## [2,] -Inf -4.381198 -Inf -0.012589273 -Inf
## [3,] -Inf -5.473653 -Inf -0.004204703 -Inf
## [4,] -Inf  0.000000 -Inf         -Inf -Inf
## [5,] -Inf -4.191544 -Inf -0.015238431 -Inf
## [6,] -Inf  0.000000 -Inf         -Inf -Inf
partition <- getPartition(resPred)
head(partition)
## [1] 2 4 4 2 4 2
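Note that the values returned by getTik above appear to be on the log scale (hence the 0 and -Inf entries). Assuming this is the case, plain probabilities can be recovered with exp, a quick sanity check being that each row then sums to 1:

```r
# Convert log-probabilities to probabilities; each row should sum to ~1
probs <- exp(tik)
head(rowSums(probs))
```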