Setting up model training as a function that takes the model as a parameter lets us train and compare different model architectures.
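The actual train_gesture_classification function lives in the sensor_classification module (linked below). As a rough sketch of the pattern, assuming PyTorch models, NumPy arrays, and illustrative hyperparameters of my own choosing, it could look something like this:

import pandas as pd
import torch
import torch.nn as nn

def train_gesture_classification(model, X_train, X_val, y_train, y_val,
                                 epochs=20, lr=1e-3):
    # Hypothetical sketch; the real function is in sensor_classification.
    # Trains on the full batch each epoch for simplicity.
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    X_tr = torch.tensor(X_train, dtype=torch.float32)
    y_tr = torch.tensor(y_train, dtype=torch.long)
    X_va = torch.tensor(X_val, dtype=torch.float32)
    y_va = torch.tensor(y_val, dtype=torch.long)
    rows = []
    for epoch in range(epochs):
        model.train()
        opt.zero_grad()
        loss = loss_fn(model(X_tr), y_tr)
        loss.backward()
        opt.step()
        model.eval()
        with torch.no_grad():
            preds = model(X_va).argmax(dim=1)
            acc = (preds == y_va).float().mean().item()
        rows.append({"epoch": epoch, "train_loss": loss.item(), "val_acc": acc})
    # Return per-epoch validation metrics as a DataFrame (val_df below).
    return pd.DataFrame(rows)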
Here we train a linear model:
model = sensor_classification.LinearModel(input_dim=X_train.shape[1] * X_train.shape[2],
                                          output_dim=len(np.unique(y_train)))
val_df = sensor_classification.train_gesture_classification(model, X_train, X_val, y_train, y_val)
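The model definitions also live in sensor_classification. A minimal sketch of what LinearModel might look like, assuming PyTorch, is a flatten followed by a single fully connected layer, which is why input_dim is the product of the two window dimensions:

import torch.nn as nn

class LinearModel(nn.Module):
    # Hypothetical sketch: flatten each (timesteps x channels) window,
    # then apply one fully connected layer producing class logits.
    def __init__(self, input_dim, output_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(), nn.Linear(input_dim, output_dim))

    def forward(self, x):
        return self.net(x)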
And a convolution-based model:
model = sensor_classification.ConvModel(input_dim=(X_train.shape[1], X_train.shape[2]),
                                        output_dim=len(np.unique(y_train)))
val_df = sensor_classification.train_gesture_classification(model, X_train, X_val, y_train, y_val)
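ConvModel could similarly be sketched as 1D convolutions over the time axis. The layer sizes here are illustrative, and the assumption that the last array dimension holds the sensor channels is mine:

import torch.nn as nn

class ConvModel(nn.Module):
    # Hypothetical sketch: 1D convolutions over time, treating the
    # sensor channels as convolution input channels.
    def __init__(self, input_dim, output_dim):
        super().__init__()
        timesteps, channels = input_dim
        self.conv = nn.Sequential(
            nn.Conv1d(channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # pool over time to a fixed size
        )
        self.head = nn.Linear(32, output_dim)

    def forward(self, x):
        # x: (batch, timesteps, channels) -> (batch, channels, timesteps)
        x = self.conv(x.transpose(1, 2)).squeeze(-1)
        return self.head(x)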
See the test code and the model definition code for the full details.
We compare the models on overall accuracy and on confusion matrices, which show the types of errors each model makes.
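A sketch of that comparison, assuming scikit-learn for the metrics and a trained PyTorch model:

import torch
from sklearn.metrics import accuracy_score, confusion_matrix, ConfusionMatrixDisplay

# Predict validation labels with the trained model.
model.eval()
with torch.no_grad():
    y_pred = model(torch.tensor(X_val, dtype=torch.float32)).argmax(dim=1).numpy()

print("accuracy:", accuracy_score(y_val, y_pred))
# Rows are true gestures, columns are predicted gestures.
ConfusionMatrixDisplay(confusion_matrix(y_val, y_pred)).plot()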


The overall accuracy was the same for the two models, so the architectures are similarly capable of classifying the gestures. The confusion matrices show where the errors are made: the supination gesture was misclassified most often. This suggests we should also look at the input training data.


Errors were concentrated in the supination and shake gestures, and changing the model architecture did not help. Looking at the input datasets, we see that these two gestures had the least training data. Together, the confusion matrices and the dataset bar plots tell us that we need to collect more data for the supination and shake gestures to train accurate models. The visualizations are useful for planning our next steps.
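The dataset bar plots mentioned above boil down to counting training examples per gesture; a sketch using NumPy and matplotlib:

import matplotlib.pyplot as plt
import numpy as np

# Count how many training windows we have for each gesture label.
labels, counts = np.unique(y_train, return_counts=True)
plt.bar([str(label) for label in labels], counts)
plt.xlabel("gesture")
plt.ylabel("training examples")
plt.title("Training set class distribution")
plt.show()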