SVM with polynomial kernel visualization

This video demonstrates the use of the “kernel trick” in AI: how points of two classes that cannot be linearly separated in 2D space can become linearly separable after a transformation function maps them into a higher dimension.

The video shows dots scattered in 2D space, some labeled blue, some labeled red. Learning algorithms, e.g. SVM, can use such labeled input to learn which regions of space are blue and which are red, and then predict for a new unlabeled dot whether it’s blue or red. However, SVM, like several other learning algorithms, can only learn a linear division of space. I.e., on a 2D plane, it can only learn to identify two regions separated by a single linear border. In 3D space, it can only divide the space with a single plane, which is also a linear separator, and so on for higher dimensions.
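To make “linear division” concrete, here is a minimal sketch (not from the video) showing that a linear SVM’s decision rule in 2D is sign(w·x + b), i.e. a single straight border. The data and the use of scikit-learn are illustrative assumptions.

import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
# Two linearly separable clusters of "blue" (0) and "red" (1) dots.
blue = rng.normal(loc=[-2, -2], scale=0.7, size=(50, 2))
red = rng.normal(loc=[2, 2], scale=0.7, size=(50, 2))
X = np.vstack([blue, red])
y = np.array([0] * 50 + [1] * 50)

clf = LinearSVC(C=1.0).fit(X, y)
w, b = clf.coef_[0], clf.intercept_[0]
# The learned border is the single line w[0]*x + w[1]*y + b = 0.
print("border coefficients:", w, b)
print("prediction for a new dot:", clf.predict([[0.5, 1.5]]))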

The example shown in the video is such that the blue dots are confined to a circular region. They can’t be linearly separated from the red dots, so SVM on its own would completely fail to learn from this example. Here’s where the kernel trick comes in handy: the dots are first transformed into a higher-dimensional space, 3D in this case. Now a linear division (a plane) can neatly separate the two regions. Transforming back, we see that the linear separation in the 3D space becomes a circular border in the original 2D space.

The transformation used is: f([x, y]) = [x, y, x^2 + y^2]. That is, if the original dimensions are x and y, we add a third dimension z = x^2 + y^2.
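Here is a minimal sketch of that explicit transformation. The data-generating code and the use of scikit-learn are illustrative assumptions, not taken from the video.

import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(1)
# Blue dots inside a circle, red dots in a ring around it.
r_blue = rng.uniform(0.0, 0.8, 200)
r_red = rng.uniform(1.2, 2.0, 200)
theta = rng.uniform(0, 2 * np.pi, 400)
r = np.concatenate([r_blue, r_red])
X = np.column_stack([r * np.cos(theta), r * np.sin(theta)])
y = np.array([0] * 200 + [1] * 200)

# No straight line separates X in 2D, but after adding z = x^2 + y^2
# the two classes sit at different heights, and a plane separates them.
Z = np.column_stack([X[:, 0], X[:, 1], X[:, 0] ** 2 + X[:, 1] ** 2])
clf = LinearSVC(C=10.0).fit(Z, y)
print("accuracy in 3D:", clf.score(Z, y))  # ~1.0 for this toy data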

This is a toy example. Even when the input is only 2D, it is usually transformed into a much higher-dimensional space, sometimes even an infinite-dimensional one. And the input itself can have many more than two dimensions.
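In practice the higher-dimensional vectors are usually never built explicitly: a kernel function computes their dot products directly. A sketch with scikit-learn’s SVC and a degree-2 polynomial kernel, reusing X and y from the snippet above (again an illustrative assumption, not the method shown in the video):

from sklearn.svm import SVC

clf = SVC(kernel="poly", degree=2, coef0=1.0, C=10.0).fit(X, y)
print("accuracy with the kernel trick:", clf.score(X, y))  # ~1.0

# kernel="rbf" corresponds to an infinite-dimensional feature space,
# yet training cost stays finite for the same reason.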

Note: this video is an HD remake of an older video. Here’s the link to the original video from 2007: http://www.youtube.com/watch?v=3liCbRZPrZA

All videos trail: