MuGen – A Music Generator
Ashmit Agarwal, Arjun Agarwal, Ashish Ansh, Ayush Saini, Manya Agarwal
Department of Computer Science and Engineering
MIT, Moradabad, Uttar Pradesh, India
Abstract. Music composition is a creative yet complex process that demands an understanding of musical theory, structure, and emotional dynamics. For many, the barriers to composing original melodies—such as lack of training or access to musical tools—limit creative expression. With the advancement of artificial intelligence and deep learning, especially in the field of sequence modeling, new opportunities have emerged for automated and accessible music generation. This study introduces MuGen, a real-time music generator that leverages a custom Transformer-based model trained on symbolic music data, including MIDI and Kern files. MuGen is designed to generate monophonic melodies with structural coherence and stylistic diversity. In this work, we detail the architecture of the model, including its encoder-decoder structure, self-attention mechanisms, and positional encoding, optimized for musical sequence prediction. The system also provides a web-based interface allowing users to input melody parameters, upload seed MIDI files, and download generated outputs in MIDI format. We evaluate MuGen’s performance using standard metrics such as BLEU score, perplexity, and diversity index, and compare its capabilities to other models like Music Transformer and MuseNet. Results demonstrate that MuGen delivers a balance of accuracy, speed, and user interactivity, making it a practical tool for musicians, students, and content creators. Future enhancements include polyphonic generation, genre conditioning, and emotion-aware composition for more personalized musical outputs.
Keywords: Music generation, Transformer model, deep learning, symbolic music, melody generation, AI composition.
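To make the architecture described in the abstract more concrete, the sketch below shows, in PyTorch, the basic ingredients of a symbolic-music Transformer of this kind: token embeddings, sinusoidal positional encoding, and masked self-attention predicting the next melody token. It is an illustrative simplification (a single causal self-attention stack rather than the full encoder-decoder the paper describes), and all class names, the vocabulary size, and the hyperparameters are assumptions for the example, not the authors' implementation.

```python
# Minimal sketch of a Transformer melody model (assumed names/sizes, not MuGen's code).
import math
import torch
import torch.nn as nn

class PositionalEncoding(nn.Module):
    """Standard sinusoidal positional encoding added to token embeddings."""
    def __init__(self, d_model: int, max_len: int = 2048):
        super().__init__()
        pos = torch.arange(max_len).unsqueeze(1)
        div = torch.exp(torch.arange(0, d_model, 2) * (-math.log(10000.0) / d_model))
        pe = torch.zeros(max_len, d_model)
        pe[:, 0::2] = torch.sin(pos * div)
        pe[:, 1::2] = torch.cos(pos * div)
        self.register_buffer("pe", pe)

    def forward(self, x):                        # x: (batch, seq_len, d_model)
        return x + self.pe[: x.size(1)]

class MelodyTransformer(nn.Module):
    """Predicts the next symbolic-music token (e.g. a pitch/duration event from MIDI or Kern data)."""
    def __init__(self, vocab_size: int = 256, d_model: int = 256,
                 n_heads: int = 4, n_layers: int = 4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.pos_enc = PositionalEncoding(d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads,
                                           dim_feedforward=4 * d_model,
                                           batch_first=True)
        self.stack = nn.TransformerEncoder(layer, n_layers)
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, tokens):                   # tokens: (batch, seq_len) of int ids
        seq_len = tokens.size(1)
        # Causal mask so each position attends only to earlier melody tokens.
        mask = torch.triu(torch.full((seq_len, seq_len), float("-inf")), diagonal=1)
        x = self.pos_enc(self.embed(tokens))
        x = self.stack(x, mask=mask)
        return self.out(x)                       # next-token logits at every step

# Example: score a random 32-step monophonic token sequence.
model = MelodyTransformer()
logits = model(torch.randint(0, 256, (1, 32)))
print(logits.shape)                              # torch.Size([1, 32, 256])
```

In a setup like this, generation proceeds autoregressively: a seed sequence (for instance, tokens extracted from an uploaded MIDI file) is fed to the model, the next token is sampled from the output logits, appended to the sequence, and the process repeats until the desired melody length is reached.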