In the second part of the programming task, we are asked to plot the ten-fold cross-validation training and test errors.
I am not sure I've understood the difference between the training and the test errors:
does the training error refer to the error on the 9/10 of the data we train on (i.e., the training samples that end up on the wrong side of the hyperplane),
while the test error refers to the error on the held-out tenth?
The main issue is that libsvm's built-in cross-validation tool (the `-v` option) reports only the average accuracy on the held-out test folds.
Should we implement cross-validation ourselves in order to obtain the training error as well?
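In case it helps anyone else wondering the same thing, here is a minimal sketch of what a manual k-fold loop could look like. It is library-free: `train_fn` and `predict_fn` are hypothetical placeholders standing in for whatever trainer you use (e.g., libsvm's `svm_train`/`svm_predict` from its Python bindings); the majority-class classifier below is just a dummy so the sketch runs on its own. The point is only that, once you control the fold loop yourself, you can evaluate the trained model on both the 9/10 training portion and the held-out fold.

```python
import random

def k_fold_indices(n, k, seed=0):
    """Shuffle indices 0..n-1 and slice them into k roughly equal folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def cross_validate(X, y, train_fn, predict_fn, k=10):
    """Return (train_errors, test_errors), one error rate per fold.

    train_fn(X_train, y_train) -> model, and predict_fn(model, x) -> label
    are placeholders for your actual learner (e.g., an SVM trainer).
    """
    folds = k_fold_indices(len(X), k)
    train_errs, test_errs = [], []
    for i in range(k):
        test_idx = set(folds[i])
        tr = [j for j in range(len(X)) if j not in test_idx]
        model = train_fn([X[j] for j in tr], [y[j] for j in tr])

        def error_rate(indices):
            # Fraction of misclassified samples among the given indices.
            wrong = sum(predict_fn(model, X[j]) != y[j] for j in indices)
            return wrong / len(indices)

        train_errs.append(error_rate(tr))        # error on the 9/10 we trained on
        test_errs.append(error_rate(folds[i]))   # error on the held-out fold
    return train_errs, test_errs

# Dummy stand-in learner: always predicts the most frequent training label.
def train_majority(X_train, y_train):
    return max(set(y_train), key=y_train.count)

def predict_majority(model, x):
    return model
```

With a real SVM you would average `train_errs` and `test_errs` across the folds and plot both curves against whatever hyperparameter the task varies.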