Recent advances in artificial intelligence have enabled spectacular progress in many fields. Though less publicized, AI applications in music are no less significant. They raise many questions: can AI match the best composers? Will it one day replace artists? Is this the future of music?
To answer these questions, we will briefly retrace the history of artificial intelligence in music, then, through "Angelia", an AI dedicated to electronic music, highlight its challenges and prospects.
Since ancient times, the links between music and algorithms have aroused the interest of scientists and composers
Music has always interested scientists for its formal qualities and, conversely, musicians have found many sources of inspiration in science. The idea of using instructions to create music dates back to the ancient Greeks, in particular Pythagoras (6th century BC). One cannot fail to mention Johann Sebastian Bach (1685-1750), who used mathematical and geometric methods, or Wolfgang Amadeus Mozart (1756-1791), who generated musical pieces by recombining fragments selected with dice rolls (Musical Dice Game, 1787).
On the scientists' side, Jacques de Vaucanson (1709-1782) created in 1738 two musical automata capable of actually playing their instruments. While Vaucanson's automata have disappeared, those made by Jaquet-Droz and Leschot in Switzerland are still on display at the Neuchâtel Museum. The Musician is probably the best example: the automaton actually plays the organ thanks to a cylinder on which its repertoire is "programmed", thus prefiguring the programming of robots. Also worth mentioning is the "Tympanon Player" by Peter Kintzing, on display at the Musée des Arts et Métiers in Paris.
Ada Lovelace (1815-1852), who assisted Charles Babbage in the design of his "Analytical Engine", was the first to write an algorithm for a computing machine, explicitly citing musical composition as a potential application.
Computers and artificial intelligence appear in the 1950s, along with the first musical experiments
The first piece of computer-generated music was created by Lejaren Hiller and Leonard Isaacson at the University of Illinois in 1955 on the Illiac (Illinois Automatic Computer).
As early as 1956, Iannis Xenakis (1922-2001) created "stochastic compositions" using a mathematical approach, and from the 1960s onward he continued by developing musical programs in Fortran.
The first International Computer Music Conference was held at Michigan State University in 1974. Gradually, AI for music became an active area of academic research.
In 1980, David Cope of the University of California at Santa Cruz developed EMI (Experiments in Musical Intelligence), a Lisp program capable of composing musical pieces in the styles of Bach, Mozart, Brahms and many others. EMI included a large database of style descriptions, recombination rules, and various composition strategies.
Neural networks and deep learning pave the way for new applications for music
In 1997, François Pachet set up a dedicated research team at the Sony Computer Science Laboratory in Paris. In 2016, the team made headlines with "Daddy's Car", a Beatles-style track composed with "Flow Machines", then arranged and performed by Benoît Carré.
With the advances in AI during the 2010s, new applications for music recognition and production were developed. Examples include Google's Magenta project, based on TensorFlow, and the subscription-based AIVA (Artificial Intelligence Virtual Artist) service. AI-based composition and production tools are gradually appearing in music production platforms (Digital Audio Workstations).
Artists are also starting to use AI in creative ways, such as Holly Herndon or Actress.
Music: a complex sound phenomenon that creates pure emotion
The most common approach is to train a neural network on a large number of scores by a given composer. Once training is complete, the network can produce a series of notes matching the composer's style when given a few starting notes. In effect, the neural network predicts the most likely continuation based on what it has learned. This gives impressive results for a few seconds, but over longer stretches it quickly becomes boring. With Frédéric Chopin, for example, we obtain a statistically correct series of notes that effectively evokes his style, but never manages to convey the romantic emotion characteristic of the Polish composer.
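To make this idea of statistical continuation concrete, here is a minimal sketch, not the neural approach itself but its simplest statistical cousin: a first-order Markov chain learned from a toy note sequence (the corpus and note names are illustrative assumptions, standing in for a real collection of scores).

```python
import random
from collections import defaultdict

# Toy "corpus": a few bars of melody as note names (hypothetical data,
# standing in for a real corpus of a composer's scores).
corpus = ["C4", "E4", "G4", "E4", "F4", "D4", "C4", "E4", "G4", "C5",
          "B4", "G4", "E4", "F4", "D4", "C4"]

# Learn first-order transition statistics: which notes tend to follow which.
transitions = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    transitions[current].append(nxt)

def continue_melody(seed, length, rng=random.Random(42)):
    """Extend `seed` by sampling statistically plausible next notes."""
    melody = list(seed)
    for _ in range(length):
        candidates = transitions.get(melody[-1])
        if not candidates:          # dead end: fall back to any corpus note
            candidates = corpus
        melody.append(rng.choice(candidates))
    return melody

print(continue_melody(["C4"], 8))
```

Each generated note is locally plausible, which is exactly why such output sounds "statistically correct" for a few seconds yet drifts aimlessly over longer spans: the model has no memory of phrases or sections.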
The main difficulty lies in the intertwined structure of musical works. The different levels (notes, measures, sections, to name only the most obvious) weave relationships both locally and globally. While current neural networks manage to resolve "tension-resolution" links at the scale of a few measures, handling longer-range correlations remains problematic.
In addition, music is pure emotion. For Kant, it is "the language of the emotions". Yet, by construction, an AI is totally devoid of emotion. "It" feels nothing. Moreover, most "hear nothing": "they" are "deaf" and can only produce statistical series of numbers without any artistic intention.
“Angelia”, an emotional artificial intelligence for electronic music
Angelia is a research project aimed at creating an artificial intelligence dedicated to electronic music. The name is a contraction of "Angel" and the acronym "IA", the French abbreviation for artificial intelligence.
An important point of the Angelia project is to put people at the center. Too often, their place is simply forgotten in artificial intelligence or robotics projects. An AI for music should augment an artist's creative capacities, not implicitly seek to replace them. It is therefore imperative that this AI be interactive and fit into the composer's creative process.
To interact, you need a common language. Music has its own language, in the form of scores known to all. Musicians have long proposed more graphical and flexible alternatives, such as Iannis Xenakis or Brian Eno. But these representations are ill-suited to algorithmic processing.
Instead, Angelia uses an "algorithmic language of musical composition" that is both understandable to an author and interpretable by AI. This approach also allows for "live coding", a musical movement that has emerged in recent years, in which music is played live with computer code.
Angelia does not rely on a single "miracle" technology capable of doing everything. Its musical programming language allows the use of different algorithms from AI research: procedural, stochastic, cellular automata, neural networks, genetic algorithms, etc. Their common feature is a bio-inspired approach, that is, one that draws its inspiration from natural processes and living things. An important part of the development also consists in "feeding" the knowledge base with corpora of data from the great classical composers, but also from jazz.
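As an illustration of one of these bio-inspired families (not Angelia's actual code), here is a minimal sketch of a cellular automaton used as a note generator: an elementary automaton (rule 90) evolves a row of cells, and each live cell triggers a note from an assumed eight-note scale.

```python
# Toy illustration: map the evolution of an elementary cellular automaton
# (rule 90) onto a scale. Scale choice and mapping are assumptions.
RULE = 90
SCALE = ["C4", "D4", "E4", "G4", "A4", "C5", "D5", "E5"]

def step(cells):
    """Apply the rule-90 update to a row of cells (edges wrap around)."""
    n = len(cells)
    return [
        (RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

def automaton_melody(generations):
    """Return one chord (the notes of the live cells) per generation."""
    row = [0] * len(SCALE)
    row[4] = 1                       # single seed cell
    melody = []
    for _ in range(generations):
        melody.append([SCALE[i] for i, c in enumerate(row) if c])
        row = step(row)
    return melody

for chord in automaton_melody(4):
    print(chord)
```

The appeal of such processes is that very simple local rules yield evolving, organic-sounding patterns; the same framework can host the other algorithm families (stochastic, genetic, neural) behind a common musical language.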
A second important characteristic of the Angelia project is to integrate emotion at the heart of its approach. While an AI cannot feel emotions, it is nevertheless possible to design an architecture in which the AI perceives its sound environment. The data obtained can be analyzed in real time to generate stimuli that drive an "emotional metabolism", a kind of simulation of our limbic system. This in turn influences certain parameters, modifying the expressiveness of the performance or the choice of a particular melodic line. In the future, we can imagine extending this perception of the environment to "listen" to other musicians and the audience, or even augmenting its perceptual capacity with artificial vision.
Towards augmented musical instruments: the “hyperinstruments”
To generate the music, Angelia is paired with an electronic instrument, such as a digital piano. The whole then becomes a kind of augmented instrument.
In performances, I use a modular synthesizer instead. The first modular synthesizers were invented in the 1960s, in parallel by Robert Moog and Donald Buchla, on the east and west coasts of the United States respectively. The advantage is being able to build your own instrument by choosing the modules that compose it: oscillators, samplers, filters, envelopes, etc., all controlled by voltage and interconnectable via patch cables. It is an impressive, fully reconfigurable instrument.
Coupled with Angelia, it becomes a "hyperinstrument" that exceeds the capacities of classical instruments (in the sense defined by S. Casanelles), making it possible to explore new "hyperrealist" territories. It comes alive and becomes almost organic. Composing and playing with such an instrument is very stimulating.
An emotional AI dedicated to electronic music is an ambitious research challenge, still under development. Several albums have been produced that show its evolution. Rather than replacing artists, AI can thus be a "musical companion" that augments the artist's creative capacity and opens the way to new musical perspectives.
To follow Angelia’s news:
- Instagram @angelia.heudin