
Process to train a GPT-2 model from scratch using Hugging Face. The dataset is built from the Mutopia Project.

juancopi81/MMM_Mutopia_Guitar


MMM Mutopia Guitar

Tutorial to train a GPT-2 model from scratch using Hugging Face and publish it as a Gradio demo using Spaces. The model generates guitar music. For encoding the guitar MIDI files of the Mutopia Project, I am using the excellent implementation by Dr. Tristan Behrens of the paper: MMM: Exploring Conditional Multi-Track Music Generation with the Transformer.
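Training "from scratch" here means initializing GPT-2 with random weights instead of loading pretrained ones. A minimal sketch with the Hugging Face `transformers` library (the config sizes and vocabulary size below are illustrative placeholders, not the tutorial's actual hyperparameters, which would come from the MMM-style MIDI tokenizer):

```python
from transformers import GPT2Config, GPT2LMHeadModel

# Small illustrative config; in the tutorial the vocab size would come from
# the tokenizer built over the MMM-encoded MIDI tokens.
config = GPT2Config(
    vocab_size=1000,   # placeholder: size of the MIDI-token vocabulary
    n_positions=512,   # maximum sequence length
    n_layer=4,
    n_head=4,
    n_embd=256,
)

# Instantiating from a config (not from_pretrained) gives random weights,
# i.e. a model trained "from scratch".
model = GPT2LMHeadModel(config)

num_params = sum(p.numel() for p in model.parameters())
```

Training itself would then run over the tokenized Mutopia dataset, for example with the `Trainer` / `TrainingArguments` API.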

By the end of this tutorial, you should have a Gradio demo similar to this one.

To start the tutorial, please visit the first notebook: 1. Collecting the data.

The dataset is built from the Mutopia Project.

This tutorial is a work in progress. You can take a look at it on Hugging Face at the following:

After generating the music in the Hugging Face widget, you can listen to the results using this notebook.
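Generation works by sampling token ids from the trained model and decoding them back into MIDI events. A self-contained sketch (it uses a tiny randomly initialized GPT-2 so it runs on its own; in the tutorial you would instead load the trained model from the Hub with `from_pretrained`, and the priming token ids would be real MMM-style MIDI tokens, not the placeholders below):

```python
import torch
from transformers import GPT2Config, GPT2LMHeadModel

# Tiny random model so the sketch is self-contained; the real tutorial
# would load the trained checkpoint from the Hugging Face Hub instead.
config = GPT2Config(vocab_size=1000, n_positions=512, n_layer=2, n_head=2, n_embd=128)
model = GPT2LMHeadModel(config).eval()

# Placeholder priming sequence (stand-ins for MMM-style MIDI token ids).
input_ids = torch.tensor([[1, 2, 3]])

generated = model.generate(
    input_ids,
    max_length=32,    # total length of the generated token sequence
    do_sample=True,   # sample instead of greedy decoding for variety
    top_k=50,
    pad_token_id=0,
)
# `generated` holds token ids that the notebook would decode back into MIDI.
```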
