June 16, 2023

Introduction

In the vast realm of artificial intelligence, deep learning has revolutionized numerous domains, including natural language processing, computer vision, and speech recognition. However, one fascinating area that has captivated researchers and music enthusiasts alike is the generation of music using artificial intelligence algorithms. Enter MusicGen, a state-of-the-art controllable text-to-music model that seamlessly translates textual prompts into captivating musical compositions.

What is MusicGen?

MusicGen is a remarkable model designed for music generation that offers simplicity and controllability. Unlike existing methods such as MusicLM, MusicGen stands out by eliminating the need for a self-supervised semantic representation. The model employs a single-stage auto-regressive Transformer architecture and is trained using a 32kHz EnCodec tokenizer. Notably, MusicGen generates all four codebooks in a single pass, setting it apart from conventional approaches. By introducing a slight delay between the codebooks, the model demonstrates the ability to predict them in parallel, resulting in a mere 50 auto-regressive steps per second of audio. This innovative approach optimizes the efficiency and speed of the music generation process.
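
To make the codebook-delay idea concrete, here is a minimal, illustrative sketch (a hypothetical helper, not audiocraft's actual implementation) of how four codebooks offset by one step each can be emitted in parallel from a single auto-regressive pass:

# Illustrative sketch of the "delay" interleaving pattern (hypothetical helper, not audiocraft code):
# with 4 codebooks and a one-step offset per codebook, the token of codebook k for audio
# frame t is emitted at decoding step t + k, so one pass of ~50 steps per second of audio
# covers all four codebooks.
def delay_pattern(num_frames, num_codebooks=4):
    total_steps = num_frames + num_codebooks - 1
    schedule = []
    for step in range(total_steps):
        # tokens emitted at this step: (codebook index, frame index)
        schedule.append([(k, step - k) for k in range(num_codebooks) if 0 <= step - k < num_frames])
    return schedule

for step, emitted in enumerate(delay_pattern(num_frames=5)):
    print(f"step {step}: {emitted}")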

MusicGen was trained on 20K hours of licensed music, comprising an internal dataset of 10K high-quality music tracks along with ShutterStock and Pond5 music data.

Pre-requisites:

As per the official MusicGen GitHub repo (https://github.com/facebookresearch/audiocraft/tree/main), the requirements are as follows (a quick environment check is sketched after the list):

  • Python 3.9
  • PyTorch 2.0.0
  • GPU with at least 16 GB of memory 
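
A quick check in a Colab cell can confirm these requirements; a minimal sketch (the expected values simply mirror the list above):

import torch

print(torch.__version__)                       # expect 2.0.0 or newer
print(torch.cuda.is_available())               # expect True when a GPU runtime is selected
print(torch.cuda.get_device_properties(0).total_memory / 1e9)  # expect roughly 16 GB or more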

Available MusicGen models

There are 4 pre-trained models available and they are as follows:

  • small: 300M model, Text to music only 
  • medium: 1.5B model, Text to music only
  • melody: 1.5B model, Text to music and text+melody to music
  • large: 3.3B model, Text to music only

Experiments

Below is the output of conditional music generation using the MusicGen "large" model.

Text Input: Jingle bell tune with violin and piano
Output: (Using MusicGen "large" model)

Below is the output of the MusicGen "melody" model. We used the above audio together with the text input below to generate the following audio.

Text Input: Add heavy drums drums and only drums
Output: (Using MusicGen "melody" model)

How to set up MusicGen on Colab

Make sure you are using a GPU for faster inference. It took ~9 minutes to generate 10 seconds of audio using a CPU, whereas using a GPU (T4) it took just 35 seconds.

  • Before starting, make sure torch and torchaudio are installed in the Colab runtime (an install command is sketched below in case they are missing).
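
A hedged example for a Colab cell (both packages are usually preinstalled on Colab):

!python3 -m pip install "torch>=2.0.0" torchaudio
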
Install the audiocraft library from Facebook.

!python3 -m pip install -U git+https://github.com/facebookresearch/audiocraft#egg=audiocraft
Import necessary libraries.

from audiocraft.models import musicgen
from audiocraft.utils.notebook import display_audio
from audiocraft.data.audio import audio_write
import torch
Load the model
The list of models is as follows:

# | model types are => small, medium, melody, large |
# | sizes of models are => 300M, 1.5B, 1.5B, 3.3B |
model = musicgen.MusicGen.get_pretrained('large', device='cuda')
Set the parameters (Optional)

model.set_generation_params(duration=60)  # this will generate 60 seconds of audio
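
Besides duration, set_generation_params exposes sampling controls such as use_sampling, top_k, and temperature. The values below are an illustrative sketch; defaults may differ between audiocraft versions:

model.set_generation_params(
    use_sampling=True,   # sample from the token distribution instead of greedy decoding
    top_k=250,           # restrict sampling to the 250 most likely tokens
    temperature=1.0,     # higher values give more varied output
    duration=30,         # generate 30 seconds of audio
)
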
Conditional Music Generation (Generate music by providing text.)

model.set_generation_params(duration=60)
res = model.generate(['Jingle bell tune with violin and piano'], progress=True)
display_audio(res, 32000)  # this will show the music controls on the Colab
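
model.generate also accepts several descriptions in one call and returns one sample per description; a minimal sketch (the second prompt is only an example):

res = model.generate(
    ['Jingle bell tune with violin and piano',
     'Slow lo-fi beat with soft piano'],
    progress=True)
display_audio(res, 32000)  # one audio player per description
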
To generate unconditional music

res = model.generate_unconditional(num_samples=1, progress=True)
# this will show the music controls on the screen
display_audio(res, 32000)
To generate music continuation

To create a music continuation we need an audio file. We feed that file to the model, and the model generates additional music to extend it.

from audiocraft.utils.notebook import display_audio
import torchaudio

path_to_audio = "path-to-audio-file.wav"
description = "Jazz jazz and only jazz"

# Load audio from a file. Make sure to trim the file if it is too long!
prompt_waveform, prompt_sr = torchaudio.load(path_to_audio)
prompt_duration = 15
prompt_waveform = prompt_waveform[..., :int(prompt_duration * prompt_sr)]

output = model.generate_continuation(prompt_waveform, prompt_sample_rate=prompt_sr,
                                     descriptions=[description], progress=True)
display_audio(output, sample_rate=32000)
To generate melody-conditioned music

model = musicgen.MusicGen.get_pretrained('melody', device='cuda')
model.set_generation_params(duration=20)

melody_waveform, sr = torchaudio.load("path-to-audio-file.wav")
melody_waveform = melody_waveform.unsqueeze(0)  # add a batch dimension (one melody for the single description)
output = model.generate_with_chroma(
    descriptions=['Add heavy drums'],
    melody_wavs=melody_waveform,
    melody_sample_rate=sr,
    progress=True)
display_audio(output, sample_rate=32000)

Write the audio file to the disk.

If you want to download the file from Colab, you first need to write the wav file to disk. Here is a function that writes wav files to disk; it takes the model output as its first argument and a filename prefix as its second.

def write_wav(output, file_initials):
    try:
        for idx, one_wav in enumerate(output):
            # write each generated sample as <file_initials>_<idx>.wav with loudness normalization
            audio_write(f'{file_initials}_{idx}', one_wav.cpu(), model.sample_rate,
                        strategy="loudness", loudness_compressor=True)
        return True
    except Exception as e:
        print("Error while writing the file:", e)
        return None

# this will write files whose names start with "audio-file"
write_wav(res, "audio-file")
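
Once the files are written, they can be downloaded from Colab with the google.colab helper; a minimal sketch (the filename assumes the "audio-file" prefix used above, and audio_write appends the index and the .wav extension):

from google.colab import files

files.download('audio-file_0.wav')
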
Full Implementation (Google colab file link)

Full implementation of Meta's MusicGen library by Pragnakalp Techlabs is given in the Colab file. Feel free to explore and create music using it.
Pragnakalp Techlabs | Meta’s MusicGen Implementation 

Conclusion

In conclusion, Audiocraft’s MusicGen is a powerful and controllable music generation model. Looking ahead, Audiocraft holds exciting future potential for advancements in AI-generated music. Whether you’re a musician or an AI enthusiast, Audiocraft’s MusicGen opens up a world of creative possibilities.
