The SVM - support vector machine - method (V. Vapnik, 1995; B. Schölkopf and A. J. Smola, 2002) finds the pair of parallel hyperplanes that separates the two classes, apart from outliers, with maximal separation between the hyperplanes.
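For concreteness, the standard soft-margin formulation of this maximal-separation problem (a textbook statement, not reproduced from the paper itself) can be written as
\[
\min_{w,\,b,\,\xi}\ \frac{1}{2}\|w\|^{2} + C\sum_{i=1}^{n}\xi_{i}
\qquad \text{subject to} \qquad
y_{i}\,(w^{\top}x_{i}+b) \ge 1-\xi_{i}, \quad \xi_{i}\ge 0,
\]
where the two parallel hyperplanes are $w^{\top}x+b=\pm 1$, the distance between them is $2/\|w\|$ (so minimizing $\|w\|$ maximizes the separation), and the slack variables $\xi_{i}$ let outliers violate the margin at a cost controlled by $C$.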
The MPM - minimax probability machine - method (Lanckriet et al., 2002) finds a separating hyperplane that minimizes the worst-case probability of misclassification, given only the means and covariance matrices of the two classes. MPM rests on a multivariate Chebyshev-type probability bound (A. W. Marshall and I. Olkin, 1960) recast into an optimization framework by D. Bertsimas and I. Popescu (2000).
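A sketch of the resulting optimization, following the general formulation of Lanckriet et al. (2002) rather than anything specific to this paper: given class means and covariances $(\bar{x}_{+},\Sigma_{+})$ and $(\bar{x}_{-},\Sigma_{-})$, the Marshall-Olkin bound reduces the minimax problem to
\[
\min_{w}\ \sqrt{w^{\top}\Sigma_{+}\,w} + \sqrt{w^{\top}\Sigma_{-}\,w}
\qquad \text{subject to} \qquad
w^{\top}(\bar{x}_{+}-\bar{x}_{-}) = 1,
\]
a convex (second-order cone) program whose optimal value determines the worst-case misclassification probability guaranteed for the resulting hyperplane $w^{\top}x = b$.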
Both the SVM and the MPM make use of the "kernel trick" (Boser, Guyon, and Vapnik, 1992) to map a linearly inseparable problem into a higher-dimensional feature space in which it becomes linearly separable.
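In both methods the trick amounts to replacing inner products with a kernel function; a minimal sketch (the Gaussian kernel is shown only as a common example, not as the paper's choice):
\[
K(x,x') = \langle \varphi(x), \varphi(x') \rangle,
\qquad \text{e.g.} \qquad
K(x,x') = \exp\!\left(-\frac{\|x-x'\|^{2}}{2\sigma^{2}}\right),
\]
so the classifiers are trained and evaluated through $K$ alone, and the feature map $\varphi$ into the higher-dimensional space never has to be computed explicitly.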
The paper also investigates the sensitivity of the solution to changes in the features used to represent the data.