Self-organizing maps (SOMs) as an AI tool for music analysis and production

Dr. Simon Linke

1. Introduction

When talking about music, we are usually biased by our musical experience, education and, of course, our tastes. This may lead to fruitful discussions in daily life, but it is a serious problem when analyzing music and its perception systematically. Artificial intelligence (AI) may offer a solution. Before it is trained, an AI model knows nothing about music, a little like the unbiased brain of a newborn baby. If the training data is chosen carefully, it can provide a neutral and objective view on rather emotional musical topics.

Most AI models we talk about nowadays, such as ChatGPT, are so-called connectionist models. They process information using complex networks of artificial neurons. While they produce surprisingly sophisticated results, it is often unclear why they make certain decisions. They may work very well for suggesting music we like, or even for composing music for us, but they usually fail when it comes to understanding the complex physical and psychological foundations of music.

This talk presents an alternative artificial intelligence approach called self-organizing maps (SOMs). Because they receive little attention in today's media, most people are not familiar with them. Nevertheless, depending on the use case, they offer great benefits compared to other AI algorithms.

The approach of self-organizing maps differs from connectionist models. The 'map' illustrates how the field of neurons organizes itself during learning by placing similar stimuli close to each other. The learning process thus becomes transparent, and it can be analyzed how results are derived; the influence of each individual data parameter can be judged. Self-organizing maps are very helpful for data browsing, clustering and classification. In later sections, however, we will also see a few examples of how these algorithms can be used creatively.
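To make this self-organization concrete, the following is a minimal sketch of the classical SOM training loop in Python with NumPy. The map size, the number of features and the random training data are purely illustrative assumptions, not part of the original talk; in practice the inputs would be feature vectors describing pieces of music. Each stimulus is matched to its best-matching unit, and that unit and its grid neighbors are nudged toward the stimulus, which is what gradually places similar stimuli close to each other on the map.

```python
import numpy as np

# Minimal self-organizing map: a 2-D grid of neurons, each holding a
# weight vector with the same dimensionality as the input features.
rng = np.random.default_rng(seed=0)

grid_h, grid_w, n_features = 10, 10, 4            # illustrative map size and feature count
weights = rng.random((grid_h, grid_w, n_features))

# Hypothetical training data: each row is one stimulus, e.g. one track
# described by normalized features (tempo, brightness, ...).
data = rng.random((200, n_features))

n_epochs = 50
for epoch in range(n_epochs):
    # Learning rate and neighborhood radius shrink as training progresses.
    lr = 0.5 * (1.0 - epoch / n_epochs)
    radius = max(1.0, (max(grid_h, grid_w) / 2) * (1.0 - epoch / n_epochs))

    for x in rng.permutation(data):
        # 1. Find the best-matching unit: the neuron closest to the stimulus.
        dists = np.linalg.norm(weights - x, axis=2)
        bmu = np.unravel_index(np.argmin(dists), dists.shape)

        # 2. Pull the BMU and its neighbors on the grid toward the stimulus,
        #    weighted by a Gaussian of the distance on the map.
        rows, cols = np.indices((grid_h, grid_w))
        grid_dist_sq = (rows - bmu[0]) ** 2 + (cols - bmu[1]) ** 2
        influence = np.exp(-grid_dist_sq / (2 * radius ** 2))
        weights += lr * influence[..., np.newaxis] * (x - weights)

# After training, similar stimuli activate nearby neurons, so the grid can be
# plotted or inspected to see how the data has clustered.
```

Because every weight vector lives in the same feature space as the input, each map position can be read directly, which is why the influence of each parameter on the final arrangement remains open to inspection.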