Anton V Goldberg
Aug 31, 2020

Abstract artists have real styles!

I always thought that there were not many abstract painters with well-defined styles differentiating their art. For example, Larionov's or Kandinsky's styles are very distinct, while Malevich's or Feininger's are not. It looked to me like the majority of abstract artists demonstrated the supremacy of form over substance, making their paintings virtually indistinguishable from one another. At the end of the day, I thought, it's easy to pretend to be an art expert: even if, as an "expert," you can't distinguish between the styles, you can go for "overfitting," since you only need to memorize a few thousand paintings.

On a (so far totally) unrelated note, I think Machine Learning (ML) reached the point a while back where anyone can grab off-the-shelf software to solve a problem, provided that kind of problem has been solved with ML before. In other words, if there is a recipe, you don't have to be a data scientist or have a PhD in ML to apply that recipe in your own environment with a very high degree of success. Looking for people who share this view, I came across fast.ai and their ML course.

The first part of the course demonstrates how anyone can write 6 lines of code to classify images of black, grizzly, and teddy bears. The course invites students to try the techniques on their own problem space, and people do: there are examples of converting sound into images and classifying those, of plotting a user's journey through a website and using such traces to find bad actors, and so on. This gave me the idea of checking whether a computer can recognize abstract painters and classify their pictures. After all, if a computer can do it, I'll be proven wrong, but I'll learn more about abstract art. If ML fails, I have proof positive that abstract art is all the same! I couldn't lose.
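
Here is roughly what those few lines look like, assuming the fastai v2 API and a folder of downloaded images with one subfolder per class (the folder names below are just placeholders, not part of the course material):

```python
# A minimal image classifier in the spirit of the course's bear example.
# Assumes a "bears/" folder containing black/, grizzly/, and teddy/ subfolders
# of downloaded images; labels are taken from the subfolder names.
from fastai.vision.all import *

path = Path('bears')
dls = ImageDataLoaders.from_folder(
    path, valid_pct=0.2, seed=42,      # random 80/20 train/validation split
    item_tfms=Resize(224))             # resize every image to 224x224
learn = cnn_learner(dls, resnet34, metrics=error_rate)  # ResNet34 pre-trained on ImageNet
learn.fine_tune(4)                     # transfer learning: train the new head, then unfreeze
```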

With this in mind, I picked 7 famous abstract and semi-abstract painters whose works I have trouble distinguishing: Feininger, James McNeill Whistler, JMW Turner, Kandinsky, Larionov, Leger, and Malevich. I downloaded 10 masterpieces by each painter and fed them into a ResNet34 pre-trained on ImageNet, using fast.ai's API. Of course, I could not place all 10 works by each painter into the training set: I left 20% aside as a validation set. Less than a minute later, I had the following confusion matrix:

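The classifier behind that matrix follows the same recipe as the bear example. Roughly, something like this (the folder layout and epoch count are illustrative, not the exact values from my run):

```python
# The same recipe applied to the paintings: one subfolder per artist,
# e.g. paintings/kandinsky/*.jpg (the layout here is illustrative).
from fastai.vision.all import *

path = Path('paintings')
dls = ImageDataLoaders.from_folder(
    path, valid_pct=0.2, seed=42,      # hold out 20% of each download for validation
    item_tfms=Resize(224))
learn = cnn_learner(dls, resnet34, metrics=error_rate)  # ResNet34 pre-trained on ImageNet
learn.fine_tune(4)

# fastai's interpretation API draws the confusion matrix over the validation set.
interp = ClassificationInterpretation.from_learner(learn)
interp.plot_confusion_matrix(figsize=(6, 6))
```
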
The result surprised me. Despite my own belief in the power of off-the-shelf solutions, I didn't expect such a high degree of success on such a small data set: with just 8 pictures per painter to train on, the model already happily classified almost 70% of the validation set correctly! I decided to add more art and see what would happen. With 10 more works by each artist, some tuning of the learning rate, and shortened training to reduce overfitting, the results improved to:

Recognition accuracy increased to 81%. While this definitely disproves my original hypothesis about the indistinguishability of abstract art, I am very happy to discover how little data one actually needs to get a working ML solution with modern ML technology and a pre-trained model (aka "transfer learning").
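
For completeness, the tuning mentioned above (finding a learning rate and cutting training short) maps onto a couple of fastai calls. A rough sketch, continuing from the learner in the earlier code; the epoch count and learning rate below are placeholders rather than the values from my runs:

```python
# Continuing from the `learn` object in the earlier sketch.
learn.lr_find()                    # plots loss vs. learning rate; pick a value from the curve
learn.fine_tune(2, base_lr=3e-3)   # fewer epochs and a hand-picked learning rate
                                   # to limit overfitting on the small dataset
```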