Tech innovation lab OpenAI recently introduced Jukebox, a new model that can create music on its own using AI.
The new technology is nothing short of impressive. The OpenAI team trained the model on raw audio clips. Instead of working with “symbolic” music, the researchers used convolutional neural networks to encode and compress raw sound. They then used a transformer to generate new compressed audio, which was upsampled back into raw audio.
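To make that pipeline concrete, here is a minimal, untrained sketch of the compress-with-convolutions, generate-in-the-compressed-domain, then upsample-back-to-audio flow described above. This is not OpenAI's Jukebox code; the layer sizes, codebook, and the random-token stand-in for the transformer are illustrative assumptions only.

```python
# Toy illustration of: raw audio -> compressed codes -> new codes -> raw audio.
# Sizes and layers are assumptions for demonstration, not Jukebox's actual design.
import torch
import torch.nn as nn

class ToyVQEncoder(nn.Module):
    """Downsamples raw audio with strided 1-D convolutions, then snaps each
    frame to its nearest codebook entry (the compression step)."""
    def __init__(self, codebook_size=512, dim=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, dim, kernel_size=4, stride=4),    # 4x downsample
            nn.ReLU(),
            nn.Conv1d(dim, dim, kernel_size=4, stride=4),  # 16x total
        )
        self.codebook = nn.Embedding(codebook_size, dim)

    def forward(self, audio):                  # audio: (batch, 1, samples)
        z = self.conv(audio).transpose(1, 2)   # (batch, frames, dim)
        # Nearest-neighbour lookup into the codebook -> discrete token ids
        dists = torch.cdist(z, self.codebook.weight.unsqueeze(0))
        return dists.argmin(dim=-1)            # (batch, frames) integer codes

class ToyDecoder(nn.Module):
    """Upsamples discrete codes back to raw audio with transposed convolutions."""
    def __init__(self, codebook_size=512, dim=64):
        super().__init__()
        self.embed = nn.Embedding(codebook_size, dim)
        self.deconv = nn.Sequential(
            nn.ConvTranspose1d(dim, dim, kernel_size=4, stride=4),
            nn.ReLU(),
            nn.ConvTranspose1d(dim, 1, kernel_size=4, stride=4),
        )

    def forward(self, codes):                  # codes: (batch, frames)
        z = self.embed(codes).transpose(1, 2)  # (batch, dim, frames)
        return self.deconv(z)                  # (batch, 1, samples)

encoder, decoder = ToyVQEncoder(), ToyDecoder()
audio = torch.randn(1, 1, 16000)               # one second of fake 16 kHz audio
codes = encoder(audio)                         # compressed, discrete representation
# Jukebox trains a large transformer to generate sequences like `codes`;
# here random token ids stand in for that generative step.
new_codes = torch.randint(0, 512, codes.shape)
generated_audio = decoder(new_codes)           # upsampled back to the raw-audio domain
print(audio.shape, codes.shape, generated_audio.shape)
```

The key design idea this sketch captures is that generating music token by token in a heavily compressed space is far more tractable for a transformer than predicting millions of raw audio samples directly.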
Before Jukebox, OpenAI had already been exploring music generation. Its previous project, MuseNet, was another music-making program that generates songs with as many as 10 different instruments in a range of styles, using a short snippet as a reference.
As exciting as Jukebox sounds, however, it’s unlikely to replace humans anytime soon. “While Jukebox represents a step forward in musical quality, coherence, length of audio sample, and ability to condition on artist, genre, and lyrics, there is a significant gap between these generations and human-created music,” OpenAI shared on its blog. “For example, while the generated songs show local musical coherence, follow traditional chord patterns, and can even feature impressive solos, we do not hear familiar larger musical structures such as choruses that repeat.”