Remember Melodyne’s Direct Note Access – the program that analyzes chords and separates them into single notes? We covered that story back in April, and I remember being blown away by the whole concept and process. For software to do a spectral analysis of a sound wave and split it into separate notes is a huge step for music computing – it lets you take a finished song and say “hey, let’s try a minor seventh chord on the guitar instead of a plain ol’ minor and see how the song sounds”.
Well, now things are getting really crazy, as two researchers from the University of Southern California have devised software capable of creating complex arrangements from a simple melody. As if that weren’t enough, they made it possible to choose the style of harmony according to presets modeled on bands like Radiohead, Keane and Green Day (for the moment, songs from all three artists have served as guinea pigs for the experiment). They call it ASSA (Automatic Style-Specific Accompaniment). After analyzing just three or four songs, it is able to produce a backing for any melody in the style of the chosen band. (If you wish to have a more detailed description of the system and its processes, read this article.)
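To get a feel for the idea of learning a band’s harmonic habits from a few songs, here is a deliberately tiny sketch – not the actual ASSA algorithm, and all the song data is invented for illustration. It simply counts which chord the “style” tends to play under each melody pitch class, then reuses those statistics to harmonize a new melody:

```python
from collections import Counter, defaultdict

# Toy sketch of style-specific accompaniment (NOT the real ASSA system):
# learn which chord a band tends to use under each melody pitch class
# from a handful of example songs, then harmonize a new melody with
# the most frequent choice. The training "songs" below are made up.

def learn_style(songs):
    """songs: list of songs, each a list of (melody_pitch_class, chord) pairs."""
    counts = defaultdict(Counter)
    for song in songs:
        for pitch_class, chord in song:
            counts[pitch_class][chord] += 1
    # For each pitch class, keep the chord this style uses most often.
    return {pc: chords.most_common(1)[0][0] for pc, chords in counts.items()}

def harmonize(melody, style, fallback="C"):
    """Attach the style's most likely chord to each melody pitch class."""
    return [(pc, style.get(pc, fallback)) for pc in melody]

# Invented training data: three short "songs" in one imaginary style.
songs = [
    [(0, "C"), (4, "Am"), (7, "G"), (0, "C")],
    [(0, "C"), (4, "Am"), (5, "F"), (7, "G")],
    [(4, "Am"), (5, "F"), (0, "C"), (7, "G")],
]

style = learn_style(songs)
print(harmonize([0, 4, 5, 9], style))
# → [(0, 'C'), (4, 'Am'), (5, 'F'), (9, 'C')]
```

The real system is far more sophisticated – it models chord progressions, rhythm and voicing, not just isolated note-to-chord counts – but the core intuition is the same: extract a band’s statistical fingerprint from a few songs, then apply it to new material.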
The results are pretty impressive. Check out these samples to compare the original and re-composed versions of a song. Here is a video of Radiohead’s Creep comparing the original accompaniment with a version automatically generated by ASSA. Not only does the system determine the notes and harmony with great precision, it also captures the general feel of the song.
The project is the brainchild of Elaine Chew, an accomplished pianist and professor in the USC Viterbi School’s Department of Industrial and Systems Engineering, and graduate student Ching-Hua Chuan, who played in various bands before pursuing her PhD in computer science.
Where is all of this going? Will music be mass-produced by those who master such programs in the not-so-distant future? A great number of musical niches already use electronic machines that bypass the need for real musicians, but even if those machines can replace human beings to a certain extent, they still need someone to program them. With ASSA, there’s one less hurdle in the creative process – all we would need to create a coherent musical piece is a basic structure, and AI could do the rest.
Like the Beamz laser music system we talked about not long ago, this potentially allows anyone to make music easily, regardless of their musical competence or imagination. I’m not saying that’s a bad thing – quite the contrary, we need to embrace technology and go with the flow. But it seems like only a matter of time before robots take control and use their human counterparts as roadies.